Abstract
Background: Risk-based analyses are increasingly popular for understanding heterogeneous treatment effects (HTEs) in clinical trials. For time-to-event analyses, the assumption that high-risk patients benefit most on the clinically important absolute scale when hazard ratios (HRs) are constant across risk strata might not hold. Absolute treatment effects can be measured as either the risk difference (RD) at a given time point or the difference in restricted mean survival time (ΔRMST), which aligns more closely with utilitarian medical decision-making frameworks. We examined risk-based HTE analyses in time-to-event settings to identify the patterns of absolute HTE across risk strata and to determine whether the ΔRMST may lead to better treatment decisions than the RD.
Methods: Using artificial and empirical time-to-event data, we compared the RD—the difference between Kaplan–Meier estimates at a certain time point—and the ΔRMST—the area between the Kaplan–Meier curves—across risk strata and showed how these metrics can prioritize different subgroups for treatment. We explored scenarios involving constant HRs while varying both the overall event rates and the discrimination of the risk models.
Results: When event rates and discrimination were low, the RD and the ΔRMST increased monotonically, with high-risk patients benefiting more than low-risk patients. As the event rate and/or discrimination increased, a ‘sweet spot’ pattern emerged: intermediate-risk patients benefited more than low-risk and high-risk patients. With the RD, the ‘sweet spot’ pattern emerged even in circumstances in which the ΔRMST increased monotonically across the risk groups, thus understating the benefit for higher-risk patients and potentially leading to treatment mistargeting.
Conclusion: The pattern of HTE characterized by the RD may diverge substantially from the ΔRMST, potentially leading to treatment mistargeting. Therefore, we recommend the ΔRMST for assessing the absolute HTE in time-to-event data.
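The divergence described above can be reproduced with a toy calculation. The sketch below is not the paper's actual analysis: it assumes exponential survival with a constant hazard ratio across risk strata (all parameter values are illustrative) and computes the RD at a horizon τ and the ΔRMST up to τ in closed form, showing that the two metrics can peak in different risk strata.

```python
import math

def surv(lam, t):
    # Exponential survival function S(t) = exp(-lam * t)
    return math.exp(-lam * t)

def rmst(lam, tau):
    # Restricted mean survival time: integral of S(t) from 0 to tau
    return (1.0 - math.exp(-lam * tau)) / lam

def rd(lam_control, hr, tau):
    # Risk difference at tau: survival gain of treated over control
    return surv(hr * lam_control, tau) - surv(lam_control, tau)

def delta_rmst(lam_control, hr, tau):
    # Difference in restricted mean survival time up to tau
    return rmst(hr * lam_control, tau) - rmst(lam_control, tau)

# Constant HR across strata; only the baseline hazard varies by risk stratum
hr, tau = 0.7, 5.0
for lam in (0.05, 0.2, 0.5, 1.0, 2.0):
    print(f"lam={lam:4.2f}  RD={rd(lam, hr, tau):.3f}  "
          f"dRMST={delta_rmst(lam, hr, tau):.3f}")
```

With these illustrative numbers the RD peaks at an intermediate baseline hazard (around λ = 0.2) while the ΔRMST keeps rising to a higher-risk stratum (around λ = 0.5), so ranking strata by RD would understate the benefit for higher-risk patients relative to a ΔRMST-based ranking.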
| Original language | English |
|---|---|
| Article number | dyaf191 |
| Journal | International Journal of Epidemiology |
| Volume | 54 |
| Issue number | 6 |
| DOIs | |
| Publication status | Published - 29 Nov 2025 |
Bibliographical note
Publisher Copyright: © The Author(s) 2025. Published by Oxford University Press on behalf of the International Epidemiological Association. All rights reserved.
Title: Diverging conclusions from risk difference and difference in restricted mean survival time in quantifying absolute treatment effect heterogeneity