Tuesday, Aug 5: 10:30 AM - 12:20 PM
4115
Contributed Papers
Music City Center
Room: CC-106B
Main Sponsor
Biometrics Section
Presentations
Evaluating the impact of medical advancements on cancer patient survival is complex, as survival can be influenced by many factors. Research progress differs across cancer entities, leading to distinct patterns of survival estimates over time. We used a competing risks analysis to disentangle yearly survival trends into their constituent parts, i.e., cancer-related deaths and deaths attributed to other causes. We examined how hazard ratios changed over the last 46 years, using patient data for acute myeloid leukemia and four solid tumors (ovarian, testicular, lung, and breast cancer) extracted from the Surveillance, Epidemiology, and End Results research data (Nov 2023 Sub (1975-2021)). Our proof-of-concept analysis allowed us to characterize the different patterns of progress for the selected cancers. Notably, these patterns align with past advances in treating these malignancies. Survival estimates for the competing event also show disease-specific profiles, but should be interpreted with greater care. Overall, our analysis underscores the potential of competing risks analyses to provide valuable insights into the population-level benefits of medical breakthroughs.
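For reference, the subdistribution hazard ratios referred to above are usually defined through the Fine-Gray framework; a standard formulation (general background, not taken from this abstract) is

\[ \lambda_k^{\mathrm{sd}}(t) = \lim_{\Delta t \to 0} \frac{1}{\Delta t}\, P\bigl\{ t \le T < t+\Delta t,\ D = k \mid T \ge t \ \text{or}\ (T < t,\ D \ne k) \bigr\}, \]

and under the proportional subdistribution hazards model \(\lambda_k^{\mathrm{sd}}(t \mid Z) = \lambda_{k0}^{\mathrm{sd}}(t)\exp(\beta^\top Z)\), the subdistribution hazard ratio associated with a covariate such as calendar period of diagnosis is \(\exp(\beta)\).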
Keywords
Competing risk analysis
Subdistribution hazard ratios
History of cancer survival trends
Indirect analysis of medical advancements in treatment and care
SEER data analysis
"Patterns of progress"
In settings where individual subjects have time-to-event data for multiple events, estimating the correlation between the times to these events can be challenging in the presence of right censoring. Standard association measures such as linear correlation cannot be estimated when not all failure times are observed. To allow estimation of the correlation between event times, we propose the use of counting process martingales indexed by time. This method can be used even in the presence of right censoring. To highlight the utility of this method, we use data from hospitalized patients who are determined to be at risk for deterioration. Here, researchers are interested in the correlation between the times to certain actions, such as the ordering of labs or the administration of medication. However, because not all actions are taken for every patient, standard techniques are not applicable. We highlight how counting process martingales can be useful in this scenario and discuss the different ways that this method can be used depending on the type of censoring.
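The abstract does not give the exact estimator, but as general background, the counting-process martingale for event type k and subject i has the familiar form

\[ M_i^{(k)}(t) = N_i^{(k)}(t) - \int_0^t Y_i^{(k)}(s)\, d\hat{\Lambda}^{(k)}(s), \]

where \(N_i^{(k)}\) is the observed counting process, \(Y_i^{(k)}\) the at-risk indicator, and \(\hat{\Lambda}^{(k)}\) an estimated cumulative hazard. One plausible time-indexed association measure is the sample correlation across subjects of the pairs \((M_i^{(1)}(t), M_i^{(2)}(t))\); because martingale residuals are defined for censored subjects as well, such a measure remains computable under right censoring.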
Keywords
Survival analysis
Correlation
Martingales
Censoring
Cumulative hazard
Counting process
The problem of "truncation by death" commonly arises in clinical studies: subjects may die before their follow-up assessment, resulting in undefined clinical outcomes. In this work, rather than treating death as a mechanism through which clinical outcomes are missing, we advocate treating death as part of the outcome measure. We propose using the survival-incorporated median, the median of a composite outcome combining death and clinical outcomes, to summarize the clinical benefit of treatment. Combining inverse probability of treatment weighting with a quantile estimation procedure, we propose an estimation method for the survival-incorporated median that is applicable to both point treatment and time-varying treatment settings. We prove the consistency and asymptotic normality of the proposed estimator. We apply this method to estimate the cognitive effects of statins in participants from the Long Life Family Study, an observational study of over 4,953 older adults with familial longevity. Through this application, we aim not only to contribute to the clinical understanding of the cognitive effects of statins, but also to offer insights into analyzing clinical outcomes truncated by death.
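A minimal sketch of the idea behind a survival-incorporated median, assuming a simple point-treatment setting with hypothetical variable names (the authors' estimator combines IPTW with a formal quantile estimation procedure; this only illustrates the composite-outcome weighted median):

```python
import numpy as np

def survival_incorporated_median(score, died, weight):
    """Weighted median of a composite outcome in which death is assigned
    a value worse than any observed clinical score.

    score  : clinical score for survivors (ignored when died == 1)
    died   : 1 if the subject died before the assessment, else 0
    weight : inverse-probability-of-treatment weight for each subject
    """
    score = np.asarray(score, dtype=float)
    died = np.asarray(died, dtype=int)
    weight = np.asarray(weight, dtype=float)

    # Composite outcome: deaths get a value below the worst observed score.
    composite = np.where(died == 1, np.nanmin(score[died == 0]) - 1.0, score)

    # Weighted median: smallest value whose cumulative normalized weight >= 0.5.
    order = np.argsort(composite)
    cumw = np.cumsum(weight[order]) / np.sum(weight)
    return composite[order][np.searchsorted(cumw, 0.5)]

# Toy example: two deaths, four survivors with cognitive scores.
print(survival_incorporated_median(
    score=[np.nan, 28, 25, np.nan, 30, 22],
    died=[1, 0, 0, 1, 0, 0],
    weight=[1.2, 0.8, 1.0, 1.1, 0.9, 1.0]))
```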
Keywords
Causal inference
Truncation by death
Survival
Quantile estimation
Observational study
Time-to-event data with long-term survivors (L-TS), subjects who never experience the event, occur in diverse fields (e.g., cancer, credit default risk, recidivism). Conventional two-sample tests (e.g., the log-rank test [LR]) ignore L-TS; several alternatives exist, but they have not been comprehensively compared. We compared seven methods via simulation: the LR, three weighted log-rank tests (WLR), two adaptive tests (two-stage and Yang-Prentice [YP]), and a correctly specified parametric model. We assessed the impact of sample size and follow-up time on type I error and power across varying effect sizes. When one or both groups lack L-TS, the LR, WLR, and YP typically have the highest power, but their order varies. When both groups have L-TS, these tests have non-monotonic power as a function of follow-up time, whereas parametric models have monotonically increasing power and the highest power at the longest follow-up time. Patterns are consistent across sample sizes. We explain the non-monotonicity by differential deviation from proportional hazards depending on follow-up time. This has implications for study planning in the presence of L-TS, as naïve use of the conventional LR can have counterintuitive properties.
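A small simulation sketch of the long-term-survivor setting described above, assuming a mixture cure data-generating model with administrative censoring at the end of follow-up and the lifelines package for the log-rank test (the cure fractions, rates, and sample sizes are illustrative, not the study's settings):

```python
import numpy as np
from lifelines.statistics import logrank_test

rng = np.random.default_rng(1)

def simulate_arm(n, cure_frac, rate, tau):
    """Mixture cure model: a cured fraction never fails; the rest fail at an
    exponential rate. Administrative censoring at follow-up time tau."""
    cured = rng.random(n) < cure_frac
    t = rng.exponential(1.0 / rate, size=n)
    t[cured] = np.inf                      # long-term survivors never fail
    observed = t <= tau
    return np.minimum(t, tau), observed.astype(int)

tau = 5.0
t0, e0 = simulate_arm(300, cure_frac=0.30, rate=0.50, tau=tau)  # control
t1, e1 = simulate_arm(300, cure_frac=0.45, rate=0.40, tau=tau)  # treatment

res = logrank_test(t0, t1, event_observed_A=e0, event_observed_B=e1)
print(res.test_statistic, res.p_value)
```

Varying tau in such a simulation is one way to see how the power of the conventional log-rank test can change non-monotonically with follow-up time when both arms contain long-term survivors.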
Keywords
Survival analysis
Long-term survivors
Log-rank test
Mixture cure model
Adaptive tests
Composite endpoints, which combine two or more distinct outcomes, are frequently used in clinical trials to increase the event rate and improve statistical power. In the recent literature, the while-alive cumulative frequency measure offers a strong alternative for defining composite survival outcomes by relating the average event rate to the survival time. Although non-parametric methods have been proposed for two-sample comparisons between cumulative frequency measures in clinical trials, limited attention has been given to regression methods that directly address time-varying effects in while-alive measures for composite survival outcomes. Motivated by an individually randomized trial (HF-ACTION) and a cluster randomized trial (STRIDE), we address this gap by developing a regression framework for while-alive measures for composite survival outcomes that include a terminal component event. Our regression approach uses splines to model the time-varying association between covariates and a while-alive loss rate of all component events, and it can be applied to both independent and clustered data. We derive the asymptotic properties of the regression estimator in each setting and evaluate its performance through simulations. Finally, we apply our regression method to analyze data from the HF-ACTION individually randomized trial and the STRIDE cluster randomized trial. The proposed methods are implemented in the WAreg R package.
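As general background (the regression framework itself is the contribution of this work), the while-alive loss rate over a follow-up window \([0, \tau]\) is commonly written as

\[ \ell(\tau) = \frac{E\{\mathcal{L}(D \wedge \tau)\}}{E(D \wedge \tau)}, \]

where \(D\) is the survival time and \(\mathcal{L}(t)\) is a loss function of the component events (for example, a weighted count of hospitalizations and death) accumulated up to time \(t\); dividing by the restricted mean survival time \(E(D \wedge \tau)\) expresses the expected loss per unit of time alive.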
Keywords
Composite endpoint
Cluster randomized trial
Spline
Time-dependent effect
While-alive
Co-Author(s)
Fan Li, Yale School of Public Health
Hajime Uno, Dana-Farber Cancer Institute
First Author
Xi Fang, Yale University
Presenting Author
Xi Fang, Yale University
The win ratio is an alternative to the time-to-first-event composite measure for analyzing survival data with multiple events. It has been increasingly used in clinical research to compare treatment groups. Unlike composite outcomes based on time to first event, which assume all events have an equal impact, the win ratio prioritizes more important events. The win ratio is defined as the ratio of wins to losses in the treatment group, based on pairwise survival comparisons. The rule determining the winner is called the win function; when the win ratio is calculated based on time to first event, it corresponds to the inverse hazard ratio in the Cox model. By incorporating event prioritization into the win function, a win ratio that accounts for event priority can be defined.
However, high-priority events do not always contribute substantially to the win ratio, as more frequent events tend to have a greater impact. This study proposes a weighted method to control event contributions in the win ratio, enabling more flexible and interpretable treatment comparisons.
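A minimal illustration of the pairwise logic behind a prioritized win ratio (the per-event weights below are hypothetical placeholders that merely illustrate down-weighting wins decided by lower-priority events; they are not the weighting scheme proposed in this work, and censoring is ignored for brevity):

```python
import numpy as np

def pairwise_win(pat_t, pat_c, weights):
    """Compare one treatment patient with one control patient.

    Each patient is a dict of event times (np.inf if the event never
    occurred within follow-up), ordered from highest to lowest priority.
    Returns (win, loss) contributions, scaled by the weight of the event
    that decided the comparison. A full implementation would declare a
    win only when the comparison is determinable given censoring.
    """
    for event, w in weights.items():           # highest priority first
        if pat_t[event] > pat_c[event]:         # treatment patient does better
            return w, 0.0
        if pat_t[event] < pat_c[event]:         # control patient does better
            return 0.0, w
    return 0.0, 0.0                             # tie on every event

def weighted_win_ratio(treated, control, weights):
    wins = losses = 0.0
    for pt in treated:
        for pc in control:
            w, l = pairwise_win(pt, pc, weights)
            wins += w
            losses += l
    return wins / losses

# Toy data: death has top priority, hospitalization is secondary.
weights = {"death": 1.0, "hosp": 0.5}           # hypothetical weights
treated = [{"death": np.inf, "hosp": 12.0}, {"death": 20.0, "hosp": 8.0}]
control = [{"death": 15.0, "hosp": np.inf}, {"death": np.inf, "hosp": 6.0}]
print(weighted_win_ratio(treated, control, weights))
```

Setting all weights to 1 recovers the standard prioritized win ratio; unequal weights rescale the contribution of comparisons decided by lower-priority events.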
Keywords
Win ratio
Survival analysis
Multiple events