Wednesday, Aug 6: 2:00 PM - 3:50 PM
4193
Contributed Papers
Music City Center
Room: CC-103A
Main Sponsor
Biopharmaceutical Section
Presentations
An important practical aspect of a clinical trial with analyses planned at pre-specified event counts is predicting the times of these landmark events, such as the 50th or the 100th event, using the accumulated data from the trial itself. Currently available model-based methods use a common failure time model for all patients in a treatment arm and predict the future failure times of patients on study and of patients yet to be enrolled. In the present work, we consider a scenario where the failure time depends on important baseline covariates such as gender, age, and gene expression status. We build a regression model that introduces the covariates through the parameters of the failure time distributions. Because our methods are based on predictive distributions of future failure times that are not available in closed form, we use Markov chain Monte Carlo (MCMC) methods to simulate from the predictive distribution. We demonstrate our methods with simulated data sets.
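To make the idea concrete, here is a minimal posterior-predictive sketch in Python. It assumes a single arm, an exponential failure model with a conjugate gamma prior, and hypothetical numbers throughout; the abstract's covariate-adjusted regression model, censoring, and staggered entry are deliberately omitted.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical trial snapshot: 40 events observed, 60 patients still at risk.
observed_times = rng.exponential(scale=12.0, size=40)  # months to event
n_at_risk = 60
target_event = 50          # landmark: predict time of the 50th event

# Exponential failure model with a conjugate Gamma(a0, b0) prior on the rate;
# the posterior is then Gamma(a0 + #events, b0 + total observed time).
a0, b0 = 1.0, 1.0
a_post = a0 + len(observed_times)
b_post = b0 + observed_times.sum()

# Posterior-predictive simulation: draw a rate, simulate future failure
# times for the patients at risk; the (50 - 40)th of these is event 50.
draws = []
for _ in range(5000):
    lam = rng.gamma(a_post, 1.0 / b_post)          # posterior draw of rate
    future = np.sort(rng.exponential(scale=1.0 / lam, size=n_at_risk))
    draws.append(future[target_event - len(observed_times) - 1])

pred = np.percentile(draws, [2.5, 50, 97.5])
print(f"Additional time to event {target_event}: median {pred[1]:.1f} months "
      f"(95% prediction interval {pred[0]:.1f}-{pred[2]:.1f})")
```

A full implementation would condition on each patient's accrued follow-up, account for censoring and future enrollment, and introduce the covariates through the distribution's parameters as the abstract describes.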
Keywords
enrollment prediction
clinical trials
forecasting
Chemoradiation for solid tumors in the thoracic region targets rapidly dividing cells, including cancer cells but also immune cell populations. Severe radiation-induced immunosuppression impairs effective immune responses against pathogens and against cancer recurrence.
Unraveling the complex relationships between treatment, intermediate endpoints (TTP/PFS/DFS), and survival is crucial to translating scientific advances into therapeutic strategies. We developed a novel Bayesian multi-state mediation modeling framework to evaluate direct and indirect treatment effects on survival. The framework (1) explicitly incorporates intermediate time-to-failure outcomes observed after treatment response, which conventional survival analyses typically neglect; and (2) leverages Bayesian estimation and variable selection techniques to enhance model reliability and address parameter uncertainty.
The method was applied to a study of esophageal cancer patients receiving photon (IMRT) versus proton (PBT) therapy to elucidate the impact of severe lymphopenia. The survival benefit in the PBT group was shown to be partly attributable (mediated proportion 35%) to reduced immunosuppression (22.0% vs. 42.7%; P < 0.001).
Keywords
Bayesian
Multistate model
Oncology
Clinical trials
Surrogate markers
Mediation analysis
Co-Author(s)
Jie Zhou, Neuroscience Biostatistics, Novartis Pharmaceuticals Corporation, East Hanover, New Jersey, USA
Peng Wei, University of Texas, MD Anderson Cancer Center
Qing Liu, Amgen
Xun Jiang, Amgen
Amy Xia, Amgen
Steven Lin, Division of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
Radhe Mohan, Division of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
Brian Hobbs, University of Texas
First Author
Yiqing Chen
Presenting Author
Yiqing Chen
In molecular targeted therapy drug development, biomarkers are often incorporated into clinical trial designs. Recently, there has been much discussion on the challenges and advantages of this approach. Identifying subpopulations of patients who benefit from new treatments based on biomarker expression can facilitate smoother drug development.
Morita et al. (2014) proposed a Bayesian phase II trial design to identify subpopulations with high treatment efficacy based on biomarker expression. To reduce the required sample size, the design was later extended to allow stepwise determination of treatment effectiveness or ineffectiveness for each subpopulation (Sugitani et al., 2023). However, both approaches use the hazard ratio as the primary endpoint and are not applicable when the proportional hazards assumption does not hold.
To address this limitation, we extend Morita et al.'s Bayesian clinical trial design by incorporating the restricted mean survival time (RMST) as the primary endpoint. Since RMST does not require the proportional hazards assumption, our proposed approach extends the applicability of biomarker-based Bayesian phase II clinical trials.
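For reference, the RMST at a horizon tau is simply the area under the Kaplan-Meier curve on [0, tau]. A minimal NumPy sketch with toy single-arm data (the proposed design's Bayesian decision rules are not shown):

```python
import numpy as np

def rmst(times, events, tau):
    """Restricted mean survival time: area under the Kaplan-Meier curve on [0, tau]."""
    order = np.argsort(times, kind="stable")
    t = np.asarray(times, dtype=float)[order]
    d = np.asarray(events)[order]
    at_risk = len(t)
    surv, last_t, area = 1.0, 0.0, 0.0
    for ti, di in zip(t, d):
        if ti > tau:
            break
        area += surv * (ti - last_t)       # rectangle under the current step
        if di:                             # event: KM curve steps down
            surv *= (at_risk - 1) / at_risk
        at_risk -= 1                       # events and censorings both leave the risk set
        last_t = ti
    return area + surv * (tau - last_t)    # flat segment out to tau

# Toy data: 1 = event, 0 = censored
times = [2, 3, 3, 5, 7, 8, 10, 12]
events = [1, 1, 0, 1, 0, 1, 1, 0]
print(f"RMST up to tau=10: {rmst(times, events, 10):.3f}")  # 6.975
```

The treatment contrast is then the difference in RMST between arms at a common tau, which is interpretable without the proportional hazards assumption.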
Keywords
Bayesian clinical trial design
Phase II trial
Subpopulation identification
Restricted mean survival time
Survival analysis
First Author
Akiyoshi Nakakura, Center for Clinical and Translational Research, Kyushu University Hospital
Presenting Author
Akiyoshi Nakakura, Center for Clinical and Translational Research, Kyushu University Hospital
Group sequential designs are commonly used in lengthy clinical trials with predefined interim analyses, which assess accumulating trial data for potential early stopping for efficacy or futility, or for study modifications. However, the reliability of interim analysis (IA) results, and the use of partial data to make consistent decisions across subsequent IAs and the final analysis (FA), are important considerations. Instead of making decisions based solely on a snapshot of the IA data, we will apply an empirical Bayes (EB) approach to strengthen our belief in the IA results and use other Bayesian approaches to obtain more robust estimates for the next IA and the FA. The Bayesian simulations utilize prior information from data collected before the IA to better estimate the treatment effect for subsequent IAs and the FA. We will examine the impact of different trajectories of treatment effect change on IA and FA results, and will compare results from Bayesian and traditional frequentist approaches under scenarios such as no change in treatment effect over time and various types of change over time.
Keywords
Interim
Bayesian
For clinical studies with multiple clinical endpoints that contribute to the risk-benefit profile of a product, it is desirable to monitor those primary endpoints simultaneously at an interim analysis. Dmitrienko and Wang (2006) introduced the Bayesian predictive probability, which is used for interim decisions including efficacy and futility stopping rules. In this paper, we expand the application of Bayesian predictive probability to cases with multiple primary endpoints.
The Bayesian predictive probability is defined as the probability of a successful outcome at the planned completion of the study, conditional on the data observed up to the time of the interim analysis and on the predicted future data. We consider the case of multiple primary endpoints assumed to follow a multivariate normal distribution with a different mean vector in each treatment group, where the mean vector has a multivariate normal prior. In this case, the generalized predictive probability (GPP) can be calculated using multivariate normal functions in SAS or R. Examples will be presented to show how to calculate the GPP and make interim decisions.
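A hedged simulation sketch of such a predictive probability for two co-primary endpoints: bivariate normal outcomes, a vague prior, and all numbers hypothetical. (The paper's closed-form GPP via multivariate normal functions is replaced here by Monte Carlo for transparency.)

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical interim state: two co-primary endpoints, treatment-minus-control
# mean differences, known outcome covariance, vague prior on the mean vector.
n_interim, n_final = 100, 200               # per-arm sample sizes
delta_hat = np.array([0.30, 0.25])          # observed interim mean differences
Sigma = np.array([[1.0, 0.5],
                  [0.5, 1.0]])              # known covariance of the outcomes

# Under a vague prior, the posterior of the true difference vector is normal,
# centered at delta_hat with the sampling covariance of a difference of means.
post_cov = 2 * Sigma / n_interim

# Predictive simulation: draw the true effect, then the future data's mean,
# combine with the interim data, and check whether BOTH endpoints clear z > 1.96.
n_sim = 20000
n_new = n_final - n_interim
crit = 1.96
se_final = np.sqrt(2 * np.diag(Sigma) / n_final)

delta = rng.multivariate_normal(delta_hat, post_cov, size=n_sim)
new_mean = delta + rng.multivariate_normal(np.zeros(2), 2 * Sigma / n_new, size=n_sim)
final_est = (n_interim * delta_hat + n_new * new_mean) / n_final
gpp = np.mean(np.all(final_est / se_final > crit, axis=1))
print(f"Predictive probability of success on both endpoints: {gpp:.3f}")
```

A low value of this probability at an interim look would support a futility stop; a high value, an efficacy claim, subject to the prespecified stopping rules.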
Keywords
Predictive Probability
Predictive distribution
Co-primary endpoints
Interim Analysis
Stopping rules
External controls from historical trials or observational data can enhance randomized controlled trials (RCTs) when large-scale randomization is impractical or unethical, such as in rare disease drug evaluations. However, non-randomized external controls may introduce biases, and existing Bayesian and frequentist methods can inflate type I error rates, especially in small-sample trials where borrowing is most needed. To address this, we propose a randomization inference framework that ensures finite-sample exact and model-free type I error control, adhering to the "analyze as you randomize" principle to mitigate hidden biases. Since biased external controls reduce randomization test power, we leverage conformal inference to develop an individualized test-then-pool procedure that selectively borrows comparable controls. Our approach accounts for selection uncertainty, providing valid post-selection inference. We also introduce an adaptive procedure to optimize selection by minimizing mean squared error. The methods are validated through theory, simulations, and a lung cancer trial with external controls.
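A toy sketch of the test-then-pool idea follows; it is not the authors' conformal procedure and carries none of their post-selection guarantees. The data, the conformity cutoff, and the test statistic are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical data: a small RCT plus a mean-shifted external control cohort.
y_trt = rng.normal(0.5, 1.0, 20)   # trial treated outcomes
y_ctl = rng.normal(0.0, 1.0, 20)   # trial control outcomes
y_ext = rng.normal(0.8, 1.0, 40)   # external controls (biased upward)

# Test-then-pool: keep an external control only if its score (distance from
# the trial controls' median) is within the trial controls' 80th percentile.
med = np.median(y_ctl)
cutoff = np.quantile(np.abs(y_ctl - med), 0.8)
borrowed = y_ext[np.abs(y_ext - med) <= cutoff]

# Randomization test: permute treatment labels only within the RCT ("analyze
# as you randomize"); borrowed controls are appended to every control pool.
n_t = len(y_trt)
trial = np.concatenate([y_trt, y_ctl])
obs = y_trt.mean() - np.concatenate([y_ctl, borrowed]).mean()
stats = []
for _ in range(5000):
    perm = rng.permutation(trial)
    stats.append(perm[:n_t].mean() - np.concatenate([perm[n_t:], borrowed]).mean())
p_value = np.mean(np.abs(stats) >= abs(obs))
print(f"Borrowed {len(borrowed)}/{len(y_ext)} external controls; p = {p_value:.3f}")
```

The abstract's contribution is precisely what this sketch lacks: accounting for the selection step so that the resulting inference remains finite-sample exact.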
Keywords
causal inference
data fusion
randomization test
real-world data and evidence
small sample size
Co-Author(s)
Shu Yang, North Carolina State University, Department of Statistics
Xiaofei Wang, Duke University Medical Center
First Author
Ke Zhu, NCSU and Duke
Presenting Author
Ke Zhu, NCSU and Duke
We describe a Bayesian approach to sample size estimation for multi-arm randomized controlled trials with response adaptive randomization (RAR). Assuming normally distributed treatment effects and an unknown but common variance, this design utilizes outcome data to estimate posterior distributions of the parameters, modifies allocation to favor effective treatments, and re-estimates the number of participants. The sample size should be sufficient to show that at least one group difference is greater than 0 (success) or that all effect sizes are smaller than a desired margin (futility), at prespecified decision thresholds. Using simulations, sample sizes are calculated with a Bayesian approach: [1] without interim analysis; [2] with interim analyses, both with and without RAR; and [3] based on hypothesis testing. We show that two interim analyses, conducted when outcomes are available for 25% and 50% of participants, can result in fewer participants overall, with a slightly larger number needed when RAR is incorporated. The ethical benefit of allocating more patients to favorable arms, despite its larger sample size requirement, should be weighed against the efficiency of equal group allocation in RAR trials.
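A minimal sketch of one RAR update step of this kind: normal outcomes, flat priors, three arms. The numbers are hypothetical, and the square-root stabilization of the allocation weights is a common choice in the RAR literature, not necessarily the authors'.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical interim data for a 3-arm trial (control + 2 treatments),
# normal outcomes with known unit variance and flat priors on the means.
means_true = [0.0, 0.3, 0.5]
n_per_arm = 30
data = [rng.normal(m, 1.0, n_per_arm) for m in means_true]

# Under a flat prior, each arm mean's posterior is approximately N(ybar, 1/n).
post_mean = np.array([d.mean() for d in data])
post_sd = np.full(3, 1.0 / np.sqrt(n_per_arm))

# Probability that each arm has the largest mean, via posterior simulation.
draws = rng.normal(post_mean, post_sd, size=(10000, 3))
p_best = np.bincount(draws.argmax(axis=1), minlength=3) / 10000

# RAR: allocate the next stage proportional to sqrt(p_best), a common
# stabilization that avoids pushing nearly all patients onto one arm.
alloc = np.sqrt(p_best) / np.sqrt(p_best).sum()
print("P(best):", np.round(p_best, 3), "next-stage allocation:", np.round(alloc, 3))
```

In the design described above, this update would be repeated at each interim look, alongside re-estimation of the total sample size needed to meet the success and futility criteria.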
Keywords
Adaptive Designs
Power and Sample Size
Trial Design
Response adaptive randomization