Abstract
There is strong interest in “opening the black box of AI”, and as a result the field of eXplainable Artificial Intelligence (XAI) has gained considerable attention in recent years. However, many explainable AI methods have not yet been applied and tested at scale on real-world data. This thesis investigates different types of explanations to overcome the transparency problem of AI in health care. Both explainable modeling (i.e., intrinsically interpretable models) and post-hoc explanations (i.e., explanations accompanying the model) are explored across a diverse set of prediction tasks in various real-world databases (e.g., Dutch general practitioner and US claims data). The thesis demonstrates that: i) hybrid approaches combining data- and knowledge-based learning can help produce more interpretable models; ii) post-hoc explanation methods currently suffer from several limitations that impede the understandability of their explanations; and iii) explainable AI design choices need to be made on a case-by-case basis, as the trade-offs and explanation needs differ per task. In conclusion, we argue that explainable AI can be instrumental in developing responsible AI, but its current limitations may hinder true understandability.
| Original language | English |
|---|---|
| Awarding Institution | |
| Supervisors/Advisors | |
| Award date | 9 Apr 2025 |
| Place of Publication | Rotterdam |
| Print ISBNs | 978-94-6510-505-5 |
| Publication status | Published - 9 Apr 2025 |
Thesis title: 'Opening the black box of explainability: Trade-offs in the design of clinical prediction models'