Tuesday, Aug 5: 2:00 PM - 3:50 PM
4118
Contributed Papers
Music City Center
Room: CC-208B
Main Sponsor
Biopharmaceutical Section
Presentations
Prognostic scores (PS) can improve statistical efficiency in clinical trials by reducing the variance of treatment effect estimates, leading to trial derisking and cost savings. We developed three PSs using 78 baseline covariates to predict 18-month changes in the Clinical Dementia Rating Scale – Sum of Boxes (CDR-SB) in a harmonized cohort of patients with Alzheimer's disease (AD), pooled from both randomized clinical trials (RCTs) and real-world databases (n = 1549). In an internal test set (n = 398), a stacked ensemble model achieved a Pearson correlation of 0.51 between predicted and observed 18-month CDR-SB changes. In a held-out RCT test set (n = 650), PS adjustment reduced treatment effect variance by 22%, yielding a power increase from 80% to 87% and an effective sample size increase of 20%. Simpler alternatives, such as a linear PS or treating the baseline AD Assessment Scale-Cognitive (ADAS-Cog) score as a PS, also provided meaningful variance reduction, with the additional benefit of improved clinical interpretability. Our findings support the use of PSs to enhance statistical efficiency in clinical trials; further validation on external datasets is recommended.
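The variance-reduction mechanism behind PS adjustment can be illustrated with a minimal synthetic simulation: adjusting the treatment effect estimate for a baseline covariate that predicts the outcome shrinks its standard error. All names, effect sizes, and the simple ANCOVA-style adjustment below are illustrative assumptions, not the study's models or data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
ps = rng.normal(size=n)                 # hypothetical prognostic score
trt = rng.integers(0, 2, size=n)        # 1:1 randomization indicator
# synthetic 18-month outcome: driven by the PS plus a treatment effect
y = 0.7 * ps - 0.5 * trt + rng.normal(size=n)

def ols(X, y):
    """OLS estimates and standard errors via the normal equations."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (len(y) - X.shape[1])
    se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))
    return beta, se

ones = np.ones(n)
_, se_unadj = ols(np.column_stack([ones, trt]), y)
_, se_adj = ols(np.column_stack([ones, trt, ps]), y)
print(f"unadjusted SE: {se_unadj[1]:.3f}, PS-adjusted SE: {se_adj[1]:.3f}")
```

The stronger the correlation between the PS and the outcome, the larger the variance reduction; randomization keeps the adjusted estimate unbiased for the treatment effect.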
Keywords
Covariate Adjustment
Prognostic Score
Clinical Trial Efficiency
Machine Learning
Alzheimer's Disease
Real World Data
In adaptive design clinical trials, controlling the Type I error rate is critical to ensuring the validity of statistical inferences. Traditional methods for Type I error control, such as the Bonferroni and Holm procedures, have been widely used to adjust for multiple comparisons. However, these methods are often conservative, limiting statistical power and increasing the risk of Type II errors. They also do not adequately account for the dynamic nature of adaptive trials, where interim analyses and information updating are integral to the study design. This paper introduces a Type I error control approach designed specifically for adaptive trials with multiple correlated comparisons. Our method uses an iterative search to identify an optimal significance level at which to test the individual hypotheses, controlling the familywise error rate while maintaining a desired level of statistical power. Through simulation studies, we compare the efficiency of our technique with more traditional methods as well as with more recent simulation-based approaches. We further integrate our approach with Go/No-Go decision-making with multiple endpoints.
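The core idea of an iterative search for a less conservative per-test level can be sketched with a small Monte Carlo: under positively correlated test statistics, bisect on a common per-test alpha until the simulated familywise error rate hits the target. The correlation structure, target, and bisection scheme below are illustrative assumptions, not the authors' procedure.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
k, rho, fwer_target = 3, 0.5, 0.05
cov = np.full((k, k), rho)
np.fill_diagonal(cov, 1.0)
# null test statistics with exchangeable correlation rho
max_z = rng.multivariate_normal(np.zeros(k), cov, size=200_000).max(axis=1)

def fwer(alpha):
    """Monte Carlo familywise error rate when each test runs at level alpha."""
    return np.mean(max_z > norm.ppf(1 - alpha))

lo, hi = fwer_target / k, fwer_target   # bracket: Bonferroni .. unadjusted
for _ in range(40):                      # bisection on the per-test level
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if fwer(mid) < fwer_target else (lo, mid)
alpha_star = 0.5 * (lo + hi)
print(f"per-test alpha: {alpha_star:.4f} (Bonferroni: {fwer_target / k:.4f})")
```

Because the statistics are positively correlated, the resolved per-test level exceeds the Bonferroni level while still controlling the FWER, which is the power gain the abstract describes.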
Keywords
Multiplicity adjustment
group sequential design
correlated endpoints
Progression-free survival (PFS) and overall survival (OS) are often used as dual primary endpoints in Phase 3 oncology trials. However, current sample size calculation tools struggle to account for the correlation between these endpoints, leading to adoption of a conservative approach that assumes their independence. Further, a subjective and arbitrary decision on the alpha split between the dual primary endpoints may not be optimal given the extent of correlation and other trial design features. We propose a simulation framework for this setting, inducing correlation using the copula method and the Moran-Downton model, tested across scenarios with different sample sizes, alpha splits, and correlation levels. Our goal is to assess trial characteristics such as power and Type I error, providing a framework for generating correlated PFS and OS endpoints. By simulating correlated endpoints, we aim to offer a more accurate representation of their relationship, leading to better-informed trial designs with more precise sample sizes. This approach will help optimize alpha allocation to maintain the desired power, ultimately enhancing the design, analysis, and duration of oncology trials.
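One standard way to induce correlation between PFS and OS is a copula: draw correlated uniforms, then invert each through its marginal survival distribution. The sketch below uses a Gaussian copula with exponential marginals and made-up medians; the abstract's Moran-Downton model is a different bivariate-exponential construction, so treat this only as a stand-in for the general mechanism.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
n, rho = 20_000, 0.6
med_pfs, med_os = 6.0, 18.0                        # hypothetical medians, months
z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=n)
u = norm.cdf(z)                                    # correlated uniforms (copula)
pfs = -med_pfs / np.log(2) * np.log(1 - u[:, 0])   # exponential marginal
os_time = -med_os / np.log(2) * np.log(1 - u[:, 1])
os_time = np.maximum(os_time, pfs)                 # progression precedes death
print(f"Pearson corr(PFS, OS): {np.corrcoef(pfs, os_time)[0, 1]:.2f}")
```

Repeating such draws across scenarios (sample size, alpha split, rho) and applying the planned tests to each replicate yields the simulated power and Type I error surfaces the framework targets.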
Keywords
Oncology
Clinical Trials
Progression Free Survival
Overall Survival
Dual primary endpoints
We consider the goal of selecting the population with the largest mean among k normal populations when variances are not known. We propose a Stein-type two-sample procedure, denoted by R, for selecting a nonempty random-size subset of size at most m (0 < m < k) that contains the population associated with the largest mean, with a guaranteed minimum probability P*, whenever the distance between the largest mean and the second largest mean is at least d, where m, P*, and d are specified in advance of the experiment. The probability of a correct selection and the expected subset size of R are derived. Critical values/procedure parameters that are required for certain k, m, P*, and d are obtained by solving simultaneous integral equations and are presented in tables.
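The probability of a correct selection can be checked by Monte Carlo in a deliberately simplified setting: known unit variances and a fixed subset of size m, evaluated at the least-favourable configuration where the best mean exceeds the others by exactly d. The paper's procedure R handles unknown variances and a random-size subset via a two-sample Stein-type design, so this is only an illustrative sketch.

```python
import numpy as np

rng = np.random.default_rng(4)
k, m, d, n, reps = 5, 2, 0.5, 50, 20_000   # hypothetical design constants
means = np.zeros(k)
means[-1] = d                               # best population is index k-1
# sample means from n observations per population, unit variance
xbar = rng.normal(means, 1.0, size=(reps, n, k)).mean(axis=1)
top_m = np.argsort(xbar, axis=1)[:, -m:]    # select the m largest sample means
p_cs = np.mean((top_m == k - 1).any(axis=1))
print(f"estimated P(CS): {p_cs:.3f}")
```

In the actual procedure, the first-stage sample estimates the unknown variances and determines the second-stage sample sizes so that the P* guarantee holds uniformly; the tables referenced in the abstract supply the corresponding critical values.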
Keywords
Expected Subset Size
Probability of A Correct Selection
Ranking and Selection
Restricted Subset Selection
Sample size re-estimation (SSR) at an interim analysis allows for adjustments based on accrued data. Here, we propose an approach that uses partially unblinded SSR methods for binary and continuous outcomes. Although this approach involves operational unblinding, its partial use of unblinded information for SSR does not include the interim effect size, hence the term "partially unblinded." Through proof-of-concept and simulation studies, we demonstrate that these adjustments can be made without compromising the Type I error rate. We also investigate different mathematical expressions for SSR under different variance scenarios: homogeneity, heterogeneity, and a combination of both. Of particular interest is this third, dual-variance form, whose use for binary outcomes requires additional clarification and for which we derive an analogous form for continuous outcomes. We show that the mathematical expressions for the dual-variance method are a compromise between those for variance homogeneity and heterogeneity, resulting in sample size estimates that are bounded between those produced by the other expressions, and we extend their applicability to adaptive trial design.
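For a binary outcome, the textbook two-proportion sample size formula already admits the three variance conventions the abstract contrasts: pooled (homogeneous) variance throughout, separate (heterogeneous) variances throughout, or a dual form that pairs the pooled variance with the alpha quantile and the separate variances with the power quantile. The functions below are these standard textbook forms, not the authors' derived expressions, and illustrate why the dual estimate is bounded between the other two.

```python
import numpy as np
from scipy.stats import norm

def n_per_arm(p1, p2, alpha=0.025, power=0.9, form="dual"):
    """Per-arm sample size for comparing two proportions under three
    variance conventions (illustrative textbook forms)."""
    z_a, z_b = norm.ppf(1 - alpha), norm.ppf(power)
    pbar = (p1 + p2) / 2
    v_null = 2 * pbar * (1 - pbar)              # pooled (homogeneous) variance
    v_alt = p1 * (1 - p1) + p2 * (1 - p2)       # separate (heterogeneous)
    d2 = (p1 - p2) ** 2
    if form == "homo":
        return (z_a + z_b) ** 2 * v_null / d2
    if form == "hetero":
        return (z_a + z_b) ** 2 * v_alt / d2
    # dual: null variance weighted by z_alpha, alternative by z_beta
    return (z_a * np.sqrt(v_null) + z_b * np.sqrt(v_alt)) ** 2 / d2

sizes = {f: n_per_arm(0.6, 0.4, form=f) for f in ("homo", "hetero", "dual")}
print(sizes)
```

Because the dual expression interpolates between the two variance terms, its sample size lands strictly between the homogeneous and heterogeneous answers whenever the two variances differ.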
Keywords
Adaptive design
Interim analysis
Sample size adjustment
Type I error preservation
Unequal treatment allocation
Variance heterogeneity
Sample Size Re-Estimation (SSRE) is examined for the comparison of two binomial distributions in a trial that includes one interim analysis (IA) for possible SSRE. Because SSRE is the only goal, there is no hypothesis test at the IA. The SSRE procedure uses conditional power based on asymptotic normality, and the impact of omitting the hypothesis test is examined by comparison with the related SSRE procedure that tests for both futility and superiority at the IA. The study that motivated this research was designed for SSRE only and is presented as an example application of this procedure.
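A generic conditional-power re-estimation under the asymptotic normal approximation can be sketched as follows: given the interim z-statistic and the current trend, search for the smallest total sample size whose conditional power reaches a target. The information scaling, trend assumption, and search below are generic illustrations, not the motivating study's rules (which would also include caps and binomial-specific variance estimates).

```python
import numpy as np
from scipy.stats import norm

def cond_power(z1, n1, n_total, theta, sd=1.0, alpha=0.025):
    """Conditional power under the current trend theta, using the asymptotic
    normal approximation; information is proportional to per-arm n."""
    i1, i2 = n1 / (2 * sd**2), n_total / (2 * sd**2)
    num = (z1 * np.sqrt(i1) + theta * (i2 - i1)
           - norm.ppf(1 - alpha) * np.sqrt(i2))
    return norm.cdf(num / np.sqrt(i2 - i1))

def reestimate_n(z1, n1, n_planned, theta, target=0.9, n_max=2000):
    """Smallest per-arm total n achieving the target conditional power."""
    for n in range(n_planned, n_max + 1):
        if cond_power(z1, n1, n, theta) >= target:
            return n
    return n_max

n_new = reestimate_n(z1=1.0, n1=100, n_planned=200, theta=0.2)
print(f"re-estimated per-arm n: {n_new}")
```

For the binomial case in the abstract, theta would be the difference in observed proportions and sd would come from the estimated variances; the key design point is that the IA is used only to recompute n, with no accept/reject decision taken.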
Keywords
Sample Size Re-Estimation
Binomial Distributions
In cardiovascular outcome trials (CVOTs), multiple types of events, including recurrent cardiovascular events and fatal events, are often of interest. The time to first event is usually chosen as the primary endpoint. To reflect the total burden of disease, some CVOTs may instead consider the total number of composite events as the primary endpoint. However, using total events as the primary endpoint may complicate the study design, particularly when fatal events are included.
This project conducted simulation studies to explore how different design parameters (e.g., overdispersion and the proportion of fatal events) impact the adaptive design strategy when the total number of events (recurrent and fatal) is the primary endpoint. Patient-level event counts within a given period were generated by a Poisson-Gamma mixture framework, and a joint-frailty setting for event rates was used to incorporate the correlation between recurrent and fatal events. As a result, more information is included in the planned interim data under higher-overdispersion scenarios. To account for the true information fraction observed at the interim, a boundary adjustment at the final analysis is proposed.
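The data-generating idea can be sketched in a few lines: a gamma frailty with unit mean and variance equal to the overdispersion parameter multiplies the Poisson rate (giving negative binomial counts), and the same frailty inflates the fatal-event hazard, linking the two processes. All numeric settings below are illustrative, not the study's simulation parameters.

```python
import numpy as np

rng = np.random.default_rng(3)
n, mu, kappa = 5_000, 2.0, 0.8                 # mean rate, overdispersion
# gamma frailty with E=1, Var=kappa (Poisson-Gamma mixture -> neg. binomial)
frailty = rng.gamma(shape=1 / kappa, scale=kappa, size=n)
recurrent = rng.poisson(mu * frailty)          # overdispersed recurrent counts
hazard = 0.05 * frailty                        # shared frailty drives death too
died = rng.random(n) < 1 - np.exp(-hazard)     # joint-frailty link to fatality
print(f"mean={recurrent.mean():.2f}, var={recurrent.var():.2f}, "
      f"deaths={died.mean():.3f}")
```

Larger kappa inflates the count variance relative to the mean, so more events (information) accrue by the planned interim, which is what motivates the information-fraction boundary adjustment at the final analysis.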
Keywords
CVOT
Recurrent Events
Adaptive Design
Overdispersion
Co-Author(s)
Leiwen Gao, Amgen Inc.
Anna McGlothlin, Berry Consultants
Todd Graves, Berry Consultants
Elizabeth Lorenzi, Berry Consultants
Qing Liu, Amgen Inc.
Huei Wang, Amgen Inc.
First Author
You Wu, Amgen Inc.
Presenting Author
You Wu, Amgen Inc.