Understanding, explaining, and utilizing medical artificial intelligence

Romain Cadario*, Chiara Longoni, Carey K. Morewedge

*Corresponding author for this work

Research output: Contribution to journal › Article › Academic › peer-review


Abstract

Medical artificial intelligence is cost-effective and scalable and often outperforms human providers, yet people are reluctant to use it. We show that resistance to the utilization of medical artificial intelligence is driven by both the subjective difficulty of understanding algorithms (the perception that they are a ‘black box’) and by an illusory subjective understanding of human medical decision-making. In five pre-registered experiments (1–3B: N = 2,699), we find that people exhibit an illusory understanding of human medical decision-making (study 1). This leads people to believe they better understand decisions made by human than algorithmic healthcare providers (studies 2A,B), which makes them more reluctant to utilize algorithmic than human providers (studies 3A,B). Fortunately, brief interventions that increase subjective understanding of algorithmic decision processes increase willingness to utilize algorithmic healthcare providers (studies 3A,B). A sixth study on Google Ads for an algorithmic skin cancer detection app finds that the effectiveness of such interventions generalizes to field settings (study 4: N = 14,013).

Original language: English
Pages (from-to): 1636-1642
Number of pages: 7
Journal: Nature Human Behaviour
Volume: 5
Issue number: 12
DOIs
Publication status: Published - 28 Jun 2021
