Explainability's Gain is Optimality's Loss? — How Explanations Bias Decision-making

Charles Wan*, Rodrigo Crisostomo Pereira Belo, Leid Zejnilović

*Corresponding author for this work

Research output: Chapter/Conference proceeding › Conference proceeding › Academic › peer-review

Abstract

Decisions in organizations are about evaluating alternatives and choosing the one that best serves organizational goals. To the extent that the evaluation of alternatives can be formulated as a predictive task with appropriate metrics, machine learning algorithms are increasingly used to improve the efficiency of the process. Explanations facilitate communication between the algorithm and the human decision-maker, making it easier for the latter to interpret the former's predictions and to decide on their basis. However, because feature-based explanations carry the semantics of causal models, they induce leakage from the decision-maker's prior beliefs. Our findings from a field experiment demonstrate empirically how this leads to confirmation bias and disparate impact on the decision-maker's confidence in the predictions. Such differences can lead to sub-optimal and biased decision outcomes.
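
As an illustration of the kind of feature-based explanation the abstract refers to, the sketch below computes per-prediction feature attributions with SHAP. The paper does not name a specific explanation method; the synthetic data, the model, and the choice of SHAP are assumptions made only for this example.

    # A minimal sketch, assuming SHAP as the feature-based explanation
    # method and synthetic data; neither is specified by the paper.
    import numpy as np
    import shap
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 3))                  # hypothetical features
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # hypothetical outcome

    model = RandomForestClassifier(random_state=0).fit(X, y)

    # Each SHAP value states how much a feature pushed this one prediction
    # above or below the baseline: the per-feature attribution a human
    # decision-maker would see alongside the model's prediction.
    explainer = shap.TreeExplainer(model)
    attributions = explainer.shap_values(X[:1])
    print(attributions)

It is this per-feature framing, read by the decision-maker as if it described causes, that the abstract argues invites leakage from prior beliefs.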

Original language: English
Title of host publication: AIES '22: Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society
Place of publication: New York, NY, United States
Pages: 778-787
Number of pages: 10
ISBN (electronic): 9781450392471
Publication status: Published - 27 Jul 2022

Bibliographical note

Funding Information:
This work was funded by Fundação para a Ciência e a Tecnologia (UIDB/00124/2020, UIDP/00124/2020 and Social Sciences DataLab - PINFRA/22209/2016), POR Lisboa and POR Norte (Social Sciences DataLab, PINFRA/22209/2016).

Publisher Copyright:
© 2022 ACM.
