Fairness and bias correction in machine learning for depression prediction across four study populations

Vien Ngoc Dang*, Anna Cascarano, Rosa H. Mulder, Charlotte Cecil, Maria A. Zuluaga, Jerónimo Hernández-González, Karim Lekadir

*Corresponding author for this work

Research output: Contribution to journal › Article › Academic › peer-review


Abstract

A significant level of stigma and inequality exists in mental healthcare, especially in under-served populations. These inequalities are reflected in the data collected for scientific purposes. When not properly accounted for, machine learning (ML) models trained on such data can reinforce these structural inequalities or biases. Here, we present a systematic study of bias in ML models designed to predict depression in four case studies covering different countries and populations. We find that standard ML approaches regularly exhibit biased behavior. We also show that mitigation techniques, both standard methods and our own post-hoc approach, can effectively reduce the level of unfair bias. No single best ML model for depression prediction provides equality of outcomes, which emphasizes the importance of analyzing fairness during model selection and of transparent reporting on the impact of debiasing interventions. Finally, we identify good practices that practitioners can adopt to enhance fairness in their models, as well as open challenges that remain.
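For readers unfamiliar with the kind of analysis the abstract describes, the sketch below illustrates, under stated assumptions, how group fairness gaps for a binary depression classifier might be measured and how a simple post-hoc, group-specific threshold adjustment can reduce them. This is a generic demographic-parity-style post-processing example, not the authors' own post-hoc method; the data, group encoding, and threshold rule are all hypothetical.

```python
# Illustrative sketch (not the paper's method): measuring fairness gaps for a
# binary depression classifier and applying a post-hoc per-group threshold
# adjustment. All data and names here are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical held-out data: predicted risk scores, true labels, and a
# binary protected attribute (group 0 vs. group 1).
scores = rng.uniform(size=1000)
y_true = (scores + rng.normal(0, 0.3, size=1000) > 0.5).astype(int)
group = rng.integers(0, 2, size=1000)

def tpr_fpr(y, y_hat):
    """True/false positive rates for one group."""
    tpr = y_hat[y == 1].mean() if (y == 1).any() else np.nan
    fpr = y_hat[y == 0].mean() if (y == 0).any() else np.nan
    return tpr, fpr

def equalized_odds_gaps(y, y_hat, g):
    """Absolute TPR and FPR differences between the two groups."""
    (tpr0, fpr0), (tpr1, fpr1) = (tpr_fpr(y[g == k], y_hat[g == k]) for k in (0, 1))
    return abs(tpr0 - tpr1), abs(fpr0 - fpr1)

# Baseline: one global decision threshold for everyone.
y_hat = (scores >= 0.5).astype(int)
print("global threshold gaps (TPR, FPR):", equalized_odds_gaps(y_true, y_hat, group))

# Post-hoc correction: choose each group's threshold so that the predicted
# positive rate matches the overall rate (a demographic-parity-style
# adjustment), leaving the underlying model untouched.
target_rate = y_hat.mean()
thresholds = {k: np.quantile(scores[group == k], 1 - target_rate) for k in (0, 1)}
y_hat_adj = np.array(
    [scores[i] >= thresholds[group[i]] for i in range(len(scores))]
).astype(int)
print("per-group threshold gaps (TPR, FPR):", equalized_odds_gaps(y_true, y_hat_adj, group))
```

Post-processing of this kind is attractive in practice because it requires no retraining and can be audited independently of the model, though, as the abstract notes, its impact on fairness should be measured and reported transparently.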

Original language: English
Article number: 7848
Journal: Scientific Reports
Volume: 14
Issue number: 1
DOIs
Publication status: Published - 3 Apr 2024

Bibliographical note

Publisher Copyright:
© The Author(s) 2024.

