Novel Statistical Methods to Avoid Failed Trials

Guangyu Tong Chair
Yale University
 
Judith Lok Organizer
Boston University
 
Wednesday, Aug 7: 10:30 AM - 12:20 PM
1711 
Topic-Contributed Paper Session 
Oregon Convention Center 
Room: CC-255 
Even when executed well, confirmatory randomized trials not infrequently conclude that there is insufficient evidence to detect a causal effect of the intervention compared to control. Several statistical methods have recently been developed to avoid such failed trials. For example, in hybrid type 2 effectiveness-implementation designs, effectiveness and implementation are tested and assessed simultaneously. Alternatively, the treatment allocation can be adaptively adjusted over the course of the trial. Covariate adjustment can increase power without jeopardizing robustness: the type I error rate is not inflated even if the adjustment model assumptions are not met. In Sequential Multiple Assignment Randomized Trials (SMARTs), Micro-Randomized Trials, and Hybrid Experimental Designs, the intervention is randomized over time depending on a patient's own past outcomes. In Learn-As-you-GO (LAGO) designs, the component composition of a multi-component intervention is optimized while the trial is ongoing, and data from all stages are used both to test the null hypothesis of no intervention effect and to estimate the optimal intervention and its effect. All these strategies share a common goal: to avoid failed trials and find effective interventions.

Applied

Yes

Main Sponsor

ENAR

Co Sponsors

Biopharmaceutical Section
Caucus for Women in Statistics

Presentations

Adaptive Neyman Allocation in Sequential Trials: An Online Optimization Perspective

In this talk, I present our recent work on the problem of Adaptive Neyman Allocation, where the experimenter seeks to construct an adaptive design that is nearly as efficient as the optimal (but infeasible) non-adaptive Neyman design, which has access to all potential outcomes. I will show that this experimental design problem is equivalent to an adversarial online convex optimization problem, suggesting that any solution must exhibit some amount of algorithmic sophistication. Next, I present Clip-OGD, an experimental design that combines the online gradient descent principle with a new time-varying probability-clipping technique. I will show that the design attains the Neyman variance in large samples by bounding the expected regret of the online optimization problem by O(√T), up to sub-polynomial factors. Even though the design is adaptive, we construct a consistent (conservative) estimator of the variance, which facilitates the development of valid confidence intervals. I will conclude with recent progress on extending this work to covariate-adjusted estimators and covariate-responsive designs, which is made possible through the online optimization perspective.
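
To make the approach concrete, here is a minimal sketch of adaptive Neyman allocation in the spirit of Clip-OGD. This is not the authors' implementation: the variance surrogate, the clipping schedule, and the step sizes are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def clip_ogd(outcomes, eta=0.01):
    """Sketch of adaptive Neyman allocation via clipped online gradient
    descent. `outcomes` is a (T, 2) array of potential outcomes, used
    only to generate the observed response; the design itself sees one
    outcome per unit. All constants are illustrative assumptions."""
    T = len(outcomes)
    p = 0.5                                  # initial treatment probability
    probs, assignments = [], []
    for t in range(T):
        # time-varying clipping keeps p away from 0 and 1, relaxing as t grows
        delta = 0.5 * (t + 1) ** (-0.25)
        p = float(np.clip(p, delta, 1 - delta))
        probs.append(p)
        w = rng.random() < p                 # randomized assignment
        assignments.append(w)
        y = outcomes[t, 1] if w else outcomes[t, 0]
        # importance-weighted unbiased gradient of the per-unit Neyman
        # variance surrogate f(p) = y(1)^2/p + y(0)^2/(1-p)
        g = -(y ** 2) / p ** 3 if w else (y ** 2) / (1 - p) ** 3
        p -= eta * g / (t + 1) ** 0.5        # decaying OGD step
    return np.array(probs), np.array(assignments)

# toy run: the treated arm is noisier, so the assignment probabilities
# should drift above 1/2, toward the (infeasible) Neyman allocation
Y = np.column_stack([rng.normal(1, 1, 500), rng.normal(1, 3, 500)])
probs, _ = clip_ogd(Y)
print(probs[-5:])
```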

Speaker

Christopher Harshaw

Balanced and Robust Randomized Treatment Assignments: The Finite Selection Model

The Finite Selection Model (FSM) was developed in the 1970s to design the RAND Health Insurance Experiment (HIE), one of the largest social science experiments conducted in the U.S. The idea behind the FSM is that each treatment group takes turns selecting units in a fair and random order to optimize a common criterion. At each of its turns, a treatment group selects the available unit that maximally improves the combined quality of its resulting group of units in terms of the criterion. In the HIE and beyond, we revisit, formalize, and extend the FSM as a general tool for balanced and efficient experimental design with multiple treatments. Leveraging the idea of D-optimality, we propose and analyze a new selection criterion for the FSM. The FSM with the D-optimal selection function has no tuning parameters, is affine invariant, and, when appropriate, recovers several classical designs such as randomized block and matched-pair designs. We demonstrate the FSM's performance in a case study based on the HIE and in ten randomized studies from the health and social sciences. We recommend that the FSM be considered in experimental design for its conceptual simplicity, efficiency, and robustness.
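
A bare-bones illustration of the FSM's turn-taking selection with a D-optimal criterion follows. The turn order is simplified here to a fresh random permutation each round rather than the fair sequencing of the actual FSM, and the ridge term is an assumption to keep early turns well defined.

```python
import numpy as np

rng = np.random.default_rng(1)

def d_criterion(X):
    """log-determinant of X'X, with a small ridge so the criterion is
    defined even while a group's design matrix is still rank-deficient."""
    G = X.T @ X + 1e-6 * np.eye(X.shape[1])
    return np.linalg.slogdet(G)[1]

def fsm(X, n_groups=2):
    """Sketch of the Finite Selection Model: treatment groups take turns
    selecting the available unit that most improves their D-criterion."""
    remaining = set(range(X.shape[0]))
    groups = [[] for _ in range(n_groups)]
    while remaining:
        for g in rng.permutation(n_groups):  # simplified turn order
            if not remaining:
                break
            rows = groups[g]
            # pick the unit yielding the largest D-criterion value
            best = max(remaining, key=lambda i: d_criterion(X[rows + [i]]))
            groups[g].append(best)
            remaining.remove(best)
    return groups

# toy example: 20 units, intercept plus 3 baseline covariates
X = np.column_stack([np.ones(20), rng.normal(size=(20, 3))])
g0, g1 = fsm(X)
print(sorted(g0), sorted(g1))
```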

Speaker

Ambarish Chattopadhyay, Stanford University

Learn-As-you-GO (LAGO) to adapt the intervention in an ongoing trial in the presence of center effects

Learn-As-you-GO (LAGO) trials optimize the component composition of a multi-component intervention while the trial is ongoing, and the final analysis uses trial data from all stages. The primary purpose of LAGO adaptations is to avoid failed trials, which is especially important if pre-trial expectations are not met. A second purpose is to optimize the intervention so that it achieves a pre-specified goal, often while minimizing cost. In LAGO trials, the observations from different stages are not independent, because the interventions in later stages depend on the outcomes of earlier stages. Hence, standard statistical methods cannot be used to prove consistency of the intervention effect estimators; instead, learning in LAGO trials is based on summary measures. I will show that with fixed center effects, estimators based on LAGO trial data are consistent and asymptotically normal, and that the null hypothesis of no effect of any of the intervention components can be tested using LAGO trial data. I will illustrate LAGO with PULESA, a clinical trial in Uganda that aims to improve blood pressure management in HIV-infected patients.
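
The stage-wise adaptation can be sketched as below, using an assumed logistic outcome model for a two-component intervention; the true coefficients, costs, goal, and search grid are all hypothetical, and the sketch omits the center effects and inferential results that are the subject of the talk.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
beta0, beta = -2.0, np.array([0.8, 0.5])       # hypothetical true model

def success_prob(X):
    return 1 / (1 + np.exp(-(beta0 + X @ beta)))

cost = np.array([1.0, 2.0])                    # assumed per-unit component costs
grid = np.array([(a, b) for a in np.linspace(0, 4, 21)
                        for b in np.linspace(0, 4, 21)])

x = np.array([1.0, 1.0])                       # stage-1 composition
X_all, y_all = [], []
for stage in range(3):
    # centers implement small variations around the recommended
    # composition, which keeps the outcome model identifiable
    Xs = np.clip(x + rng.uniform(-0.5, 0.5, (100, 2)), 0, 4)
    ys = rng.random(100) < success_prob(Xs)
    X_all.append(Xs); y_all.append(ys)
    # re-fit on the data from ALL stages so far; observations across
    # stages are dependent, which is what the LAGO theory addresses
    fit = LogisticRegression().fit(np.vstack(X_all), np.hstack(y_all))
    # cheapest composition whose predicted success probability meets the goal
    ok = grid[fit.predict_proba(grid)[:, 1] >= 0.8]
    if len(ok):
        x = ok[np.argmin(ok @ cost)]
    print(f"stage {stage + 1}: recommend {np.round(x, 2)}")
```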

Co-Author(s)

Ante Bing
Donna Spiegelman, Yale School of Public Health

Speaker

Judith Lok, Boston University

Model-Robust Inference for Clinical Trials that Improve Precision by Stratified Randomization and Covariate Adjustment

Two commonly used methods for improving precision and power in clinical trials are stratified randomization and covariate adjustment. However, many trials do not fully capitalize on the combined precision gains from these two methods, which can lead to wasted resources in terms of sample size and trial duration. We derive consistency and asymptotic normality of model-robust estimators that combine these two methods, and show that these estimators can lead to substantial gains in precision and power. Our theorems cover a class of estimators that handle continuous, binary, and time-to-event outcomes; missing outcomes under the missing-at-random assumption are handled as well. For each estimator, we give a formula for a consistent variance estimator that is model-robust and that fully captures the variance reductions from stratified randomization and covariate adjustment. We also give what is, to the best of our knowledge, the first proof of consistency and asymptotic normality of the Kaplan-Meier estimator under stratified randomization, and we derive its asymptotic variance. The above results also hold for the biased-coin covariate-adaptive design. We demonstrate our results using three randomized clinical trials.
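
As a rough illustration of model-robust covariate adjustment, here is a sketch of a per-arm regression-adjusted estimator of the average treatment effect; consistency does not require the linear working model to be correct. The variance formulas and the stratified-randomization refinements from the talk are not reproduced.

```python
import numpy as np

def adjusted_ate(y, a, X):
    """Covariate-adjusted ATE estimate: fit a linear working model of y on
    centered covariates within each arm, then shift each arm mean to the
    full-sample covariate mean. Model-robust: the working model may be wrong."""
    Xc = X - X.mean(axis=0)                    # center covariates
    est = []
    for arm in (1, 0):
        m = a == arm
        Z = np.column_stack([np.ones(m.sum()), Xc[m]])
        b = np.linalg.lstsq(Z, y[m], rcond=None)[0]
        # arm mean minus the covariate-imbalance correction
        est.append(y[m].mean() - Xc[m].mean(axis=0) @ b[1:])
    return est[0] - est[1]

# toy usage with a true effect of 1.0
rng = np.random.default_rng(4)
X = rng.normal(size=(400, 2))
a = rng.integers(0, 2, 400)
y = 1.0 * a + X @ np.array([0.7, -0.4]) + rng.normal(size=400)
print(adjusted_ate(y, a, X))
```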

Speaker

Bingkai Wang, Johns Hopkins University, Bloomberg School of Public Health

The Design of Hybrid Type 2 Studies

Hybrid type 2 designs enable the concurrent examination of effectiveness and implementation outcomes, placing equal importance on the two. However, these designs pose statistical challenges, especially in cluster randomized trials (CRTs), where standard methods for powering studies can be inadequate. This work explores methodologies for validly powering hybrid type 2 studies in a CRT setting. A literature search revealed 18 relevant publications and identified five methods, two of which are extended here to address clustering: the combined outcomes approach and the single 1-degree-of-freedom combined test. We describe and illustrate procedures for powering studies using these methods, drawing inspiration from a Chicago Implementation Research Center (CIRCL) study on blood pressure control and reach. The conjunctive test, one of the identified methods, has a hypothesis setup that aligns with the research goals and yields lower sample size requirements than popular p-value adjustment methods.
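
For intuition, the power of a conjunctive test, which requires both the effectiveness and the implementation outcome to be significant, can be approximated by simulation. The sketch below uses a cluster-summary t-test per outcome and, for simplicity, treats the two outcomes as independent; a real hybrid type 2 power calculation would account for their correlation, and all parameter values here are assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

def conjunctive_power(n_clusters=30, m=50, effects=(0.3, 0.3),
                      icc=0.05, alpha=0.05, reps=2000):
    """Monte Carlo power for a conjunctive test in a parallel-arm CRT:
    success requires BOTH outcomes to reach significance. Effects are in
    total-SD units; each outcome is analyzed by a t-test on cluster means."""
    hits = 0
    for _ in range(reps):
        arm = np.repeat([0, 1], n_clusters // 2)
        sig_both = True
        for eff in effects:
            # cluster means: treatment effect + between-cluster variation
            # + sampling noise of the within-cluster mean
            sd = np.sqrt(icc + (1 - icc) / m)
            ybar = eff * arm + rng.normal(0, sd, n_clusters)
            t = stats.ttest_ind(ybar[arm == 1], ybar[arm == 0])
            sig_both &= t.pvalue < alpha
        hits += sig_both
    return hits / reps

print(conjunctive_power())
```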

Speaker

Melody Owen, Yale University