Wednesday, Aug 6: 10:30 AM - 12:20 PM
4179
Contributed Papers
Music City Center
Room: CC-207D
Main Sponsor
Biopharmaceutical Section
Presentations
An equal randomization ratio (1:1) is the most common ratio used in confirmatory clinical trials. It has been argued that unequal randomization could be favored over equal randomization for many reasons, including encouraging trial recruitment, reducing costs, and yielding more robust estimates in the treatment arm. However, unequal randomization is still rarely applied in trial design despite these benefits, as the challenge remains that there has been no method for determining the optimal randomization ratio that balances a trial's many competing considerations. To address this issue, we developed an optimization framework that determines the optimal randomization ratio to maximize a trial's probability of success and expected net value, two major concerns in trial design, based on trial parameters such as prior knowledge of efficacy, sample size, budget, and cost. The proposed method is evaluated with simulations and a hypothetical trial. The simulation results show how the optimal randomization ratio changes with different trial parameter inputs, and that the method can successfully reduce cost while maintaining a high probability of success.
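As a rough illustration of the trade-off such a framework navigates, the sketch below grid-searches the treatment allocation fraction to maximize an expected-net-value objective: an assurance-style probability of success (power averaged over a prior on the true effect) times a success value, minus per-arm costs. The normal-outcome model, cost structure, and all numbers are hypothetical assumptions for illustration, not the authors' framework.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def prob_success(ratio, n_total, sigma, prior_mean, prior_sd,
                 alpha=0.025, draws=10_000):
    """Assurance-style probability of success for a one-sided test:
    frequentist power averaged over a normal prior on the true effect,
    with fraction `ratio` of patients allocated to the treatment arm.
    Arm sizes are treated as continuous for simplicity."""
    n_t, n_c = ratio * n_total, (1 - ratio) * n_total
    se = sigma * np.sqrt(1 / n_t + 1 / n_c)
    effects = rng.normal(prior_mean, prior_sd, draws)  # prior draws of true effect
    return norm.cdf(effects / se - norm.ppf(1 - alpha)).mean()

def expected_net_value(ratio, n_total, **kw):
    """Illustrative objective: PoS * value of success - per-arm trial costs."""
    pos = prob_success(ratio, n_total, kw["sigma"], kw["prior_mean"], kw["prior_sd"])
    cost = n_total * (ratio * kw["cost_treat"] + (1 - ratio) * kw["cost_control"])
    return pos * kw["value"] - cost, pos

# Hypothetical trial parameters: effect prior, success value, per-patient costs.
params = dict(sigma=1.0, prior_mean=0.35, prior_sd=0.10,
              value=5e6, cost_treat=12_000, cost_control=8_000)
ratios = np.linspace(0.30, 0.80, 51)          # candidate treatment fractions
scores = [expected_net_value(r, 300, **params) for r in ratios]
best_r, (best_env, best_pos) = max(zip(ratios, scores), key=lambda t: t[1][0])
print(f"optimal treatment fraction ~ {best_r:.2f}, "
      f"PoS = {best_pos:.3f}, expected net value = {best_env:,.0f}")
```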
Keywords
Randomization
Probability of success
Optimization
Trial Design
Oncology
Survival Analysis
Co-Author(s)
Weijia Mai, Duke University School of Medicine, Dept. of Biostatistics & Bioinformatics
Yuanyuan Han, Bristol Myers Squibb
First Author
Luoying Yang, Bristol Myers Squibb
Presenting Author
Luoying Yang, Bristol Myers Squibb
Cryptococcal meningitis (CM) is an infection of the brain that causes over 100,000 HIV-related deaths each year. In clinical studies that aim to evaluate treatment efficacy for CM, early fungicidal activity (EFA) during the first 2 weeks of therapy has been used as a standard measure of the rate of Cryptococcus clearance from longitudinally measured cerebrospinal fluid. In the CM literature, EFA has been estimated using simple linear regression (SLR). However, recent studies have also utilized linear mixed models (LMM) to estimate EFA. While the two models produce quite different estimates in the literature, there has been no systematic comparison between the approaches. To address this, we conducted a series of simulations to empirically assess the performance of each model under various scenarios. We also compared the two models using real data from ENACT, a Phase II clinical trial for CM. Our analysis found that the use of LMM for EFA estimation may produce biased estimates, especially for subjects who achieved sterility faster. However, when comparing the treatment difference across two arms, the LMM is more efficient than the SLR approach in scenarios with outliers.
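A minimal sketch of the two estimation strategies being compared, run on simulated log10 CFU trajectories (all parameters hypothetical): per-subject SLR slopes averaged across subjects, versus a single mixed model with random intercepts and slopes fit with statsmodels.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)

# Simulate log10 CFU/mL over the first 2 weeks for 30 subjects, with
# subject-specific baselines and clearance slopes plus residual noise.
n_sub, days = 30, np.array([0, 3, 7, 10, 14])
rows = []
for i in range(n_sub):
    b0 = rng.normal(5.0, 0.8)        # baseline fungal burden
    b1 = rng.normal(-0.25, 0.08)     # subject-specific EFA (slope)
    for d in days:
        rows.append({"id": i, "day": d,
                     "logcfu": b0 + b1 * d + rng.normal(0, 0.3)})
df = pd.DataFrame(rows)

# SLR approach: a separate regression per subject, slopes then averaged.
slr_slopes = [np.polyfit(g["day"], g["logcfu"], 1)[0]
              for _, g in df.groupby("id")]
print(f"SLR mean EFA: {np.mean(slr_slopes):.3f} log10 CFU/mL/day")

# LMM approach: one model with random intercepts and slopes by subject.
lmm = smf.mixedlm("logcfu ~ day", df, groups=df["id"], re_formula="~day").fit()
print(f"LMM EFA (fixed slope): {lmm.params['day']:.3f} log10 CFU/mL/day")
```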
Keywords
HIV
cryptococcal meningitis
early fungicidal activity
clinical trial
longitudinal data
infectious disease
We propose a sample size estimation method based on the value function for multi-arm Sequential Multiple-Assignment Randomized Trials (SMARTs) in which the goal is to estimate an optimal dynamic treatment regime (DTR), or treatment rule. Despite the increasing adoption of SMARTs in the last decade, recent systematic reviews revealed that sample size calculations often do not consider heterogeneous treatment effects, even when the primary aim is finding the best treatment based on an individual's covariates. We evaluate the proposed method through simulation studies and demonstrate that both the magnitude of differences in conditional treatment effects and the prevalence of tailoring covariates in the target population are important for determining sample size. We discuss the motivating example, the Biomarkers for Evaluating Spine Treatments (BEST) Trial, a SMART investigating four treatment modalities for chronic low back pain (cLBP) to inform a precision medicine approach to cLBP treatment.
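The flavor of such a calculation can be conveyed with a deliberately simplified one-stage surrogate (not the authors' SMART value-function method): simulate trials with a binary tailoring covariate, fit a Q-learning-style regression with a treatment-by-covariate interaction, and track how often the estimated rule recovers the optimal rule as n grows. Both the effect difference `delta` and the prevalence `p_x` are hypothetical knobs.

```python
import numpy as np

rng = np.random.default_rng(2)

def rule_recovery(n, p_x, delta, sigma=1.0, reps=500):
    """Fraction of simulated one-stage trials in which a Q-learning-style
    fit recovers the optimal rule (treat only when X = 1) exactly.
    p_x: prevalence of the binary tailoring covariate X;
    delta: difference in conditional treatment effects between X groups."""
    hits = 0
    for _ in range(reps):
        x = rng.binomial(1, p_x, n)            # tailoring covariate
        a = rng.binomial(1, 0.5, n)            # 1:1 randomization
        # treatment helps when X = 1 (+delta/2) and hurts when X = 0 (-delta/2)
        effect = np.where(x == 1, delta / 2, -delta / 2)
        y = a * effect + rng.normal(0, sigma, n)
        # least-squares fit of Y ~ 1 + A + X + A*X
        design = np.column_stack([np.ones(n), a, x, a * x])
        beta = np.linalg.lstsq(design, y, rcond=None)[0]
        # estimated rule: treat group X = v if predicted benefit is positive
        rule = [bool(beta[1] + beta[3] * v > 0) for v in (0, 1)]
        hits += rule == [False, True]
    return hits / reps

# The n needed for reliable rule recovery depends on both delta and p_x.
for n in (100, 200, 400, 800):
    print(f"n={n:4d}: P(recover rule) ~ {rule_recovery(n, p_x=0.3, delta=0.4):.2f}")
```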
Keywords
Experimental Design
Precision Medicine
Dynamic Treatment Regimes
Off-policy Estimation
Causal Inference
Decision-making
Sequential multiple assignment randomized trials mimic the actual treatment processes experienced by physicians and patients in clinical settings and inform the comparative effectiveness of dynamic treatment regimes. In such trials, patients go through multiple stages of treatment, and the treatment assignment is adapted over time based on individual patient characteristics such as disease status and treatment history. In this work, we develop and evaluate statistically valid interim monitoring approaches to allow for early termination of sequential multiple assignment randomized trials for efficacy regarding survival outcomes. We propose a weighted log-rank Chi-square statistic to account for overlapping treatment paths and quantify how the log-rank statistics at two different analysis points are correlated. Efficacy boundaries at multiple interim analyses can then be established using the Pocock, O'Brien-Fleming, and Lan-DeMets boundaries. We run extensive simulations to evaluate and compare the type I error and power of our proposed method with those of an existing statistic. The methods are demonstrated via an analysis of a neuroblastoma study dataset.
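To illustrate the boundary machinery mentioned above, the sketch below computes two-look efficacy boundaries from the Lan-DeMets O'Brien-Fleming-type alpha-spending function, assuming the standard independent-increments correlation sqrt(t1/t2) between analyses. The authors' weighted log-rank statistic itself is not reproduced; the information fractions and alpha are illustrative.

```python
import numpy as np
from scipy.stats import norm, multivariate_normal
from scipy.optimize import brentq

alpha = 0.025  # one-sided significance level

def obf_spend(t):
    """Lan-DeMets O'Brien-Fleming-type alpha-spending function."""
    return 2 * (1 - norm.cdf(norm.ppf(1 - alpha / 2) / np.sqrt(t)))

# Two analyses at information fractions t1 (interim) and t2 (final).
t1, t2 = 0.5, 1.0
a1, a2 = obf_spend(t1), obf_spend(t2)

c1 = norm.ppf(1 - a1)  # interim boundary spends a1

# Final boundary c2 solves P(Z1 < c1, Z2 >= c2) = a2 - a1, where the two
# standardized statistics have correlation sqrt(t1/t2) under the usual
# independent-increments (Brownian motion) structure.
rho = np.sqrt(t1 / t2)
mvn = multivariate_normal(mean=[0, 0], cov=[[1, rho], [rho, 1]])

def crossing_gap(c2):
    # P(Z1 < c1, Z2 >= c2) - (a2 - a1), via the bivariate normal CDF
    return norm.cdf(c1) - mvn.cdf([c1, c2]) - (a2 - a1)

c2 = brentq(crossing_gap, 1.5, 4.0)
print(f"interim boundary z = {c1:.3f}, final boundary z = {c2:.3f}")
```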
Keywords
Log-rank Statistics
Dynamic Treatment Regimes
Interim Monitoring
Efficacy Boundaries
Inverse Probability Weighting
Trial Efficiency
Co-Author(s)
Yu Cheng, University of Pittsburgh
Abdus Wahed, University of Rochester
First Author
Zi Wang, University of Pittsburgh
Presenting Author
Zi Wang, University of Pittsburgh
Phase II single-arm trials with binary endpoints often use Simon's two-stage minimax and optimal designs. These designs are derived by first identifying feasible solutions constrained by the type I and II error rates. The minimax design minimizes the total sample size, while the optimal design minimizes the expected sample size under the null response rate. However, because they do not explicitly optimize the error rates, the attained error rates often deviate from their targets. To address this limitation, we propose the Pareto optimal design, which applies multi-objective optimization (MOO) to generate a Pareto frontier, improving alignment between the attained and desired error rates. This approach also enhances the probability of early termination when the null response rate holds. Furthermore, we demonstrate the use of a genetic algorithm (GA)-based MOO framework to efficiently identify Pareto-optimal designs.
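For context, the sketch below brute-forces the feasible set that Simon's designs (and any Pareto frontier over them) are built from: exact binomial operating characteristics for candidate (n1, r1, n, r), then the minimax and optimal picks. The MOO/GA layer is not shown, and the search ranges are narrowed for illustration.

```python
import numpy as np
from scipy.stats import binom

def operating_chars(n1, r1, n, r, p):
    """P(declare promising) and P(early termination) for a Simon two-stage
    design: stop after stage 1 if X1 <= r1; declare promising if X1 + X2 > r."""
    x1 = np.arange(r1 + 1, n1 + 1)
    p_declare = np.sum(binom.pmf(x1, n1, p) * (1 - binom.cdf(r - x1, n - n1, p)))
    return p_declare, binom.cdf(r1, n1, p)

p0, p1, alpha, beta = 0.10, 0.30, 0.05, 0.20
feasible = []
# Search ranges narrowed for speed; a full search would widen them.
for n in range(20, 36):
    for n1 in range(5, 20):
        for r1 in range(0, 6):
            for r in range(r1 + 1, 10):
                a, pet0 = operating_chars(n1, r1, n, r, p0)
                if a > alpha:
                    continue
                power, _ = operating_chars(n1, r1, n, r, p1)
                if power >= 1 - beta:
                    en0 = n1 + (1 - pet0) * (n - n1)   # expected N under p0
                    feasible.append((n, round(en0, 2), n1, r1, r))

minimax = min(feasible, key=lambda d: (d[0], d[1]))   # smallest maximum N
optimal = min(feasible, key=lambda d: d[1])           # smallest EN0
print("minimax (n, EN0, n1, r1, r):", minimax)
print("optimal (n, EN0, n1, r1, r):", optimal)
```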
Keywords
Phase II
Simon's two-stage design
Pareto frontier
Multi-objective optimization
Early-phase clinical trials face the challenge of selecting optimal drug doses that balance safety and efficacy amid uncertain dose-response relationships and varied participant characteristics. Traditional randomized dose allocation does not consider individual covariates, often exposing participants to sub-optimal doses and leading to larger sample sizes and longer trials. This paper introduces a risk-inclusive contextual bandit algorithm leveraging multi-armed bandit (MAB) strategies to optimize dosing using participant-specific data. The algorithm improves dose allocation balance by integrating separate Thompson samplers for efficacy and safety. Effect sizes are estimated robustly with a generalized version of the asymptotic confidence sequence (AsympCS) method (Waudby-Smith et al., 2024), ensuring uniform coverage for effect sizes over time. AsympCS validity is also established in the MAB framework. Empirical results show the method outperforms randomized and efficacy-focused Thompson samplers, and a real-data application from a Phase IIb study aligns with the actual findings.
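A stripped-down sketch of the "separate samplers for efficacy and safety" idea, without contexts or the AsympCS inference layer, and with hypothetical dose-response numbers: two banks of Beta-Bernoulli Thompson samplers, where the efficacy draw is maximized only over doses whose sampled toxicity clears a cap.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical per-dose response/toxicity probabilities (unknown to the algorithm)
true_eff = np.array([0.15, 0.30, 0.45, 0.55])
true_tox = np.array([0.05, 0.10, 0.25, 0.45])
tox_cap = 0.30                           # maximum acceptable toxicity
K = len(true_eff)

# Beta(1, 1) priors: one sampler per dose for efficacy, one for toxicity
eff_s, eff_f = np.ones(K), np.ones(K)
tox_s, tox_f = np.ones(K), np.ones(K)

for t in range(300):                     # enroll 300 participants sequentially
    eff_draw = rng.beta(eff_s, eff_f)    # Thompson draw: efficacy
    tox_draw = rng.beta(tox_s, tox_f)    # Thompson draw: toxicity
    ok = tox_draw <= tox_cap             # doses deemed safe this round
    # fall back to the safest dose if none clear the cap, else best efficacy
    arm = int(np.argmin(tox_draw)) if not ok.any() else int(
        np.argmax(np.where(ok, eff_draw, -np.inf)))
    # observe binary efficacy and toxicity outcomes, update both posteriors
    y_eff = rng.binomial(1, true_eff[arm])
    y_tox = rng.binomial(1, true_tox[arm])
    eff_s[arm] += y_eff; eff_f[arm] += 1 - y_eff
    tox_s[arm] += y_tox; tox_f[arm] += 1 - y_tox

print("allocations per dose:", (eff_s + eff_f - 2).astype(int))
print("posterior mean efficacy:", np.round(eff_s / (eff_s + eff_f), 2))
```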
Keywords
Anytime-valid policy evaluation
Dose-ranging studies
Efficacy and Safety
Model-assisted inference
Sequential causal inference
Traditional early-phase dose-finding methods rely solely on toxicity to select the maximum tolerated dose (MTD). These methods can be insufficient for targeted therapies, which may not exhibit a monotonically increasing dose-efficacy curve. FDA's Project Optimus advocates selecting doses that are both safe and efficacious, aiming to identify the optimal biological dose (OBD) and thus maximize the risk-benefit tradeoff. Many model-assisted designs, such as BOIN12, BOIN-ET, and STEIN, have been proposed for this purpose. However, these designs use only binary toxicity and efficacy outcomes, leading to significant information loss in the dosing decision-making process. To tackle this, a normalized equivalent toxicity score (NETS) was proposed to treat toxicity as a quasi-continuous variable. We propose a new approach integrating NETS with the STEIN design while incorporating a Gaussian-distributed efficacy measure to evaluate potential doses. Extensive simulation studies show that the proposed design improves trial efficiency compared with existing designs by: 1) improving OBD selection rates with better patient allocation, and 2) exhibiting higher probabilities of early trial termination with smaller sample sizes due to futility or over-toxicity.
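To convey why a quasi-continuous toxicity score carries more information than a binary DLT indicator, here is a purely illustrative scoring rule; the grade weights and combination rule below are assumptions for demonstration, not the published NETS definition.

```python
# Hypothetical severity weights by adverse-event grade (0-4); an actual
# NETS weighting would come from the cited methodology.
GRADE_WEIGHT = {0: 0.0, 1: 0.1, 2: 0.3, 3: 0.6, 4: 1.0}

def toxicity_score(ae_grades):
    """Collapse a patient's adverse-event grades into one score in [0, 1].
    Uses the worst event plus diminishing contributions from the rest, so
    e.g. one grade-3 plus two grade-2 events scores worse than grade 3
    alone (an illustrative rule, not the published one)."""
    w = sorted((GRADE_WEIGHT[g] for g in ae_grades), reverse=True)
    score = w[0] + sum(wi * (1 - w[0]) * 0.5 ** i
                       for i, wi in enumerate(w[1:], 1))
    return min(score, 1.0)

# A binary DLT flag would treat these patients very differently from the
# quasi-continuous score:
print(toxicity_score([3, 2, 2]))   # 0.69: moderate multi-event burden
print(toxicity_score([4]))         # 1.00: single dose-limiting event
```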
Keywords
Normalized Equivalent Toxicity Score
Bayesian Adaptive Design
Dose Finding
Phase I/II Clinical Trials