TY - JOUR
T1 - Designing interpretable deep learning applications for functional genomics: a quantitative analysis
AU - van Hilten, Arno
AU - Katz, Sonja
AU - Saccenti, Edoardo
AU - Niessen, Wiro J.
AU - Roshchupkin, Gennady
N1 - Publisher Copyright:
© 2024 The Author(s).
PY - 2024/9
Y1 - 2024/9
AB - Deep learning applications have had a profound impact on many scientific fields, including functional genomics. Deep learning models can learn complex interactions between and within omics data; however, interpreting and explaining these models can be challenging. Interpretability is essential not only to help advance our understanding of the biological mechanisms underlying traits and diseases but also for establishing trust in these models' efficacy for healthcare applications. Recognizing this importance, recent years have seen the development of numerous diverse interpretability strategies, making it increasingly difficult to navigate the field. In this review, we present a quantitative analysis of the challenges arising when designing interpretable deep learning solutions in functional genomics. We explore design choices related to the characteristics of genomics data, the neural network architectures applied, and strategies for interpretation. By quantifying the current state of the field with a predefined set of criteria, we find the most frequent solutions, highlight exceptional examples, and identify unexplored opportunities for developing interpretable deep learning models in genomics.
UR - https://www.webofscience.com/api/gateway?GWVersion=2&SrcApp=eur_pure&SrcAuth=WosAPI&KeyUT=WOS:001314678700006&DestLinkType=FullRecord&DestApp=WOS_CPL
DO - 10.1093/bib/bbae449
M3 - Review article
C2 - 39293804
SN - 1467-5463
VL - 25
JO - Briefings in Bioinformatics
JF - Briefings in Bioinformatics
IS - 5
M1 - bbae449
ER -