Why rankings of biomedical image analysis competitions should be interpreted with care

L Maier-Hein, M Eisenmann, A Reinke, S Onogur, M Stankovic, P Scholz, T Arbel, H Bogunovic, AP Bradley, A Carass, C Feldmann, AF Frangi, PM Full, BTJ Ginneken, A Hanbury, K Honauer, M Kozubek, BA Landman, K Marz, O Maier, K Maier-Hein, BH Menze, Henning Müller, PF Neher, Wiro Niessen, N Rajpoot, GC Sharp, K Sirinukunwattana, S Speidel, C Stock, D Stoyanov, AA Taha, F Van der Sommen, CW Wang, MA Weber, G Zheng, P Jannin, A Kopp-Schneider

Research output: Contribution to journal › Article › Academic › peer-review

197 Citations (Scopus)
10 Downloads (Pure)

Abstract

International challenges have become the standard for validation of biomedical image analysis methods. Given their scientific impact, it is surprising that a critical analysis of common practices related to the organization of challenges has not yet been performed. In this paper, we present a comprehensive analysis of biomedical image analysis challenges conducted up to now. We demonstrate the importance of challenges and show that the lack of quality control has critical consequences. First, reproducibility and interpretation of the results are often hampered, as only a fraction of relevant information is typically provided. Second, the rank of an algorithm is generally not robust to a number of variables, such as the test data used for validation, the ranking scheme applied, and the observers that make the reference annotations. To overcome these problems, we recommend best practice guidelines and define open research questions to be addressed in the future.
Original language: English
Article number: 5217
Journal: Nature Communications
Volume: 9
DOIs
Publication status: Published - 6 Dec 2018

Research programs

  • EMC NIHES-03-30-02
  • EMC NIHES-03-30-03
