TY - JOUR
T1 - Understanding, explaining, and utilizing medical artificial intelligence
AU - Cadario, Romain
AU - Longoni, Chiara
AU - Morewedge, Carey K.
N1 - Acknowledgements:
Before joining Rotterdam School of Management, R.C. received funding from the Susilo Institute for Ethics in the Global Economy, Questrom School of Business, Boston University. R.C. thanks the Erasmus Research Institute in Management for providing funding for data collection. The authors thank T. Sangers, M. Wakkee, A. Udrea and SkinVision for their feedback on human and algorithmic decision processes.
Publisher Copyright:
© 2021, The Author(s), under exclusive licence to Springer Nature Limited.
PY - 2021/6/28
Y1 - 2021/6/28
AB - Medical artificial intelligence is cost-effective and scalable and often outperforms human providers, yet people are reluctant to use it. We show that resistance to the utilization of medical artificial intelligence is driven both by the subjective difficulty of understanding algorithms (the perception that they are a ‘black box’) and by an illusory subjective understanding of human medical decision-making. In five pre-registered experiments (1–3B: N = 2,699), we find that people exhibit an illusory understanding of human medical decision-making (study 1). This leads people to believe they better understand decisions made by human than algorithmic healthcare providers (studies 2A,B), which makes them more reluctant to utilize algorithmic than human providers (studies 3A,B). Fortunately, brief interventions that increase subjective understanding of algorithmic decision processes increase willingness to utilize algorithmic healthcare providers (studies 3A,B). A sixth study on Google Ads for an algorithmic skin cancer detection app finds that the effectiveness of such interventions generalizes to field settings (study 4: N = 14,013).
UR - http://www.scopus.com/inward/record.url?scp=85119983853&partnerID=8YFLogxK
DO - 10.1038/s41562-021-01146-0
M3 - Article
AN - SCOPUS:85119983853
VL - 5
SP - 1636
EP - 1642
JO - Nature Human Behaviour
JF - Nature Human Behaviour
SN - 2397-3374
IS - 12
ER -