A calibration hierarchy for risk models was defined: from utopia to empirical data

Ben Van Calster, Daan Nieboer, Yvonne Vergouwe, Bavo De Cock, Michael J. Pencina, Ewout W. Steyerberg

Research output: Contribution to journal › Article › Academic › peer-review

246 Citations (Scopus)

Abstract

Objective: Calibrated risk models are vital for valid decision support. We define four levels of calibration and describe implications for model development and external validation of predictions.

Study Design and Setting: We present results based on simulated data sets.

Results: A common definition of calibration is "having an event rate of R% among patients with a predicted risk of R%," which we refer to as "moderate calibration." Weaker forms of calibration only require the average predicted risk (mean calibration) or the average prediction effects (weak calibration) to be correct. "Strong calibration" requires that the event rate equal the predicted risk for every covariate pattern, which implies that the model is fully correct for the validation setting. We argue that this is unrealistic: the model type may be incorrect, the linear predictor is only asymptotically unbiased, and all nonlinear and interaction effects would have to be modeled correctly. In addition, we prove that moderate calibration guarantees nonharmful decision making. Finally, results indicate that a flexible assessment of calibration in small validation data sets is problematic.

Conclusion: Strong calibration is desirable for individualized decision support but unrealistic and counterproductive, because it stimulates the development of overly complex models. Model development and external validation should focus on moderate calibration.
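The hierarchy can be illustrated with a small simulation. The sketch below is not from the article; it is a minimal Python example, assuming numpy and statsmodels are available, that checks mean calibration via the overall event rate, weak calibration via the calibration intercept and slope, and moderate calibration via grouped observed-versus-predicted risks. The miscalibration scenario and all variable names are hypothetical.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Hypothetical validation set in which the model is miscalibrated:
# true log-odds are a shifted, shrunken version of the model's linear predictor.
n = 5000
lp_hat = rng.normal(-1.0, 1.5, n)           # model's linear predictor
p_hat = 1 / (1 + np.exp(-lp_hat))           # predicted risks
lp_true = 0.4 + 0.7 * lp_hat                # slope < 1: typical of an overfit model
y = rng.binomial(1, 1 / (1 + np.exp(-lp_true)))

# Mean calibration ("calibration-in-the-large"): average prediction vs event rate.
print("mean predicted risk:", p_hat.mean(), "observed event rate:", y.mean())

# Weak calibration: fit logit(P(y=1)) = a + b * logit(p_hat);
# ideal values are intercept a = 0 and slope b = 1.
fit = sm.Logit(y, sm.add_constant(lp_hat)).fit(disp=0)
print("calibration intercept:", fit.params[0], "calibration slope:", fit.params[1])

# Moderate calibration: observed event rate as a function of predicted risk.
# A flexible smoother is usual; here a crude check by risk-decile groups.
edges = np.quantile(p_hat, np.linspace(0, 1, 11))
group = np.digitize(p_hat, edges[1:-1])
for g in range(10):
    m = group == g
    print(f"group {g}: mean predicted {p_hat[m].mean():.3f}, observed {y[m].mean():.3f}")
```

In this scenario the fitted calibration slope recovers roughly 0.7, flagging the weak-calibration failure, while the grouped comparison shows how predicted and observed risks diverge across the risk range; strong calibration would additionally require agreement within every covariate pattern, which no such summary can verify.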
Original language: Undefined/Unknown
Pages (from-to): 167-176
Number of pages: 10
Journal: Journal of Clinical Epidemiology
Volume: 74
DOIs
Publication status: Published - 2016