Generalizability of Cardiovascular Disease Clinical Prediction Models: 158 Independent External Validations of 104 Unique Models

Gaurav Gulati, Jenica Upshaw, Benjamin S Wessler, Riley J Brazil, Jason Nelson, David van Klaveren, Christine M Lundquist, Jinny G Park, Hannah McGinnes, Ewout W Steyerberg, Ben Van Calster, David M Kent

Research output: Contribution to journal › Article › Academic › peer-review

32 Citations (Scopus)
17 Downloads (Pure)

Abstract

Background: While clinical prediction models (CPMs) are increasingly used to guide patient care, the performance and clinical utility of these CPMs in new patient cohorts are poorly understood.

Methods: We performed 158 external validations of 104 unique CPMs across 3 domains of cardiovascular disease (primary prevention, acute coronary syndrome, and heart failure). Validations were performed in publicly available clinical trial cohorts, and model performance was assessed using measures of discrimination, calibration, and net benefit. To explore potential reasons for poor model performance, CPM-clinical trial cohort pairs were stratified by relatedness, a domain-specific set of characteristics used to qualitatively grade the similarity of the derivation and validation patient populations. We also examined the model-based C-statistic to assess whether changes in discrimination were because of differences in case mix between the derivation and validation samples. The impact of model updating on performance was also assessed.

Results: Discrimination decreased significantly between model derivation (0.76 [interquartile range, 0.73-0.78]) and validation (0.64 [interquartile range, 0.60-0.67]; P<0.001), but approximately half of this decrease was because of the narrower case mix in the validation samples. CPMs had better discrimination when tested in related trial cohorts than in distantly related ones. Calibration slope was also significantly higher in related trial cohorts (0.77 [interquartile range, 0.59-0.90]) than in distantly related cohorts (0.59 [interquartile range, 0.43-0.73]; P=0.001). Across the full range of decision thresholds between half and twice the outcome incidence, 91% of models carried a risk of harm (net benefit below the default strategy) at some threshold; this risk could be reduced substantially by updating the model intercept, updating the calibration slope, or completely re-estimating the model.

Conclusions: Model performance decreases significantly when cardiovascular disease CPMs are applied to new patient populations, resulting in a substantial risk of harm. Model updating can mitigate these risks. Care should be taken when using CPMs to guide clinical decision-making.
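The validation measures named in the abstract (C-statistic, calibration slope, decision-curve net benefit against a default strategy, and logistic recalibration for model updating) have standard definitions. The following is a minimal sketch of how they can be computed on a validation cohort; it is not the authors' code, the function names and simulated data are hypothetical, and it assumes NumPy and scikit-learn (>= 1.2) are available.

# Hedged illustration, not the authors' analysis code. Function names
# (c_statistic, calibration_slope_intercept, net_benefit, recalibrate)
# are hypothetical; the data are simulated.
import numpy as np
from sklearn.linear_model import LogisticRegression  # assumes scikit-learn >= 1.2
from sklearn.metrics import roc_auc_score


def _logit(p, eps=1e-12):
    p = np.clip(p, eps, 1 - eps)
    return np.log(p / (1 - p))


def c_statistic(y, risk):
    # Discrimination: probability a random event is ranked above a random
    # non-event; identical to the area under the ROC curve.
    return roc_auc_score(y, risk)


def calibration_slope_intercept(y, risk):
    # Regress the observed outcome on the linear predictor (logit of the
    # predicted risk). A slope < 1 means predictions are too extreme.
    lp = _logit(risk).reshape(-1, 1)
    fit = LogisticRegression(penalty=None).fit(lp, y)
    return float(fit.coef_[0, 0]), float(fit.intercept_[0])


def net_benefit(y, risk, threshold):
    # Decision-curve net benefit of treating patients with risk >= threshold.
    n = len(y)
    treat = risk >= threshold
    tp = np.sum(treat & (y == 1))
    fp = np.sum(treat & (y == 0))
    return tp / n - (fp / n) * threshold / (1 - threshold)


def net_benefit_treat_all(y, threshold):
    # Net benefit of the "treat everyone" strategy at the same threshold.
    prev = np.mean(y)
    return prev - (1 - prev) * threshold / (1 - threshold)


def recalibrate(risk, slope, intercept):
    # Model updating: logistic recalibration of the original linear predictor.
    lp_new = intercept + slope * _logit(risk)
    return 1.0 / (1.0 + np.exp(-lp_new))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Simulated validation cohort in which the model's predictions are too
    # extreme -- one common source of calibration-slope shrinkage.
    true_lp = rng.normal(-1.5, 1.0, size=5000)
    y = rng.binomial(1, 1 / (1 + np.exp(-true_lp)))
    predicted_risk = 1 / (1 + np.exp(-2.0 * true_lp))  # overconfident model

    slope, intercept = calibration_slope_intercept(y, predicted_risk)
    print(f"C-statistic:       {c_statistic(y, predicted_risk):.3f}")
    print(f"Calibration slope: {slope:.2f}, intercept: {intercept:.2f}")

    # Check thresholds between half and twice the outcome incidence, as in
    # the abstract; the default strategy is treat-all or treat-none,
    # whichever has the higher net benefit.
    incidence = y.mean()
    for t in (incidence / 2, incidence, 2 * incidence):
        nb_model = net_benefit(y, predicted_risk, t)
        nb_default = max(net_benefit_treat_all(y, t), 0.0)
        flag = "harm" if nb_model < nb_default else "ok"
        print(f"threshold {t:.3f}: model NB {nb_model:.4f}, "
              f"default NB {nb_default:.4f} [{flag}]")

    # After updating (intercept and slope), the calibration slope moves to ~1.
    updated = recalibrate(predicted_risk, slope, intercept)
    print(f"Slope after updating: {calibration_slope_intercept(y, updated)[0]:.2f}")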

Original language: English
Pages (from-to): e008487
Journal: Circulation: Cardiovascular Quality and Outcomes
Volume: 15
Issue number: 4
Publication status: Published - Apr 2022

Bibliographical note

Funding Information:
Research reported in this work was funded through a Patient-Centered Outcomes Research Institute (PCORI) Award (ME-1606-35555).

Publisher Copyright:
© 2022 Lippincott Williams and Wilkins. All rights reserved.
