Statistical Issues and Advances in Modern Trial Designs

Chair: Sarah Weinstein
University of Pennsylvania
 
Monday, Aug 5: 8:30 AM - 10:20 AM
5033 
Contributed Papers 
Oregon Convention Center 
Room: CC-E148 

Main Sponsor

Biometrics Section

Presentations

Bayesian Adaptive Randomization for the I-SPY 2 SMART

The I-SPY 2 sequential multiple assignment randomized trial (SMART) is designed to identify optimal treatment regimes for breast cancer. The design has three stages meant to mimic the decision-making process for sequential treatment. Subjects begin the trial with a first-stage randomization to the available arms. If interim information on the tumor looks promising, they can go to surgery after the first stage, and their pathological complete response (pCR) status is assessed. Otherwise, they proceed to a second stage of randomization. If they do not go to surgery after the second stage, they receive a third stage of treatment and then go to surgery. Because there are many possible treatment regimes and a desire to improve outcomes within the trial, response-adaptive randomization offers potential statistical and ethical benefits over uniform randomization. We present a Bayesian adaptive randomization scheme, known as Thompson sampling, that randomizes subjects to arms according to the posterior probability that each arm maximizes the chance of a pCR. Simulation studies show that our method improves in-trial pCR rates and identifies optimal regimes at rates similar to uniform randomization.
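
A minimal sketch of the Thompson sampling idea for a single randomization stage, assuming a Beta-Bernoulli model for the pCR outcome; the arm probabilities and sample size below are hypothetical, and the actual I-SPY 2 SMART posterior model over multi-stage regimes is more involved.

    import numpy as np

    rng = np.random.default_rng(1)
    true_pcr = [0.30, 0.45, 0.60]          # hypothetical per-arm pCR probabilities
    succ = np.zeros(3)                     # pCR counts per arm
    fail = np.zeros(3)                     # non-pCR counts per arm

    for subject in range(300):
        # Draw once from each arm's Beta posterior (uniform prior) and pick the best draw;
        # this assigns each arm with the posterior probability that it maximizes pCR.
        draws = rng.beta(succ + 1, fail + 1)
        arm = int(np.argmax(draws))
        outcome = rng.random() < true_pcr[arm]
        succ[arm] += outcome
        fail[arm] += 1 - outcome

    print("allocations:", (succ + fail).astype(int))
    print("posterior mean pCR:", np.round((succ + 1) / (succ + fail + 2), 3))

In expectation, allocations drift toward the arm with the highest pCR probability while the other arms continue to receive some subjects.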

Keywords

SMART

adaptive randomization

clinical trials

precision medicine

sequential decision making

multi-armed bandits 

View Abstract 3339

Co-Author(s)

Marie Davidian, North Carolina State University
Christina Yau, UCSF
Anastasios Tsiatis, North Carolina State University
Denise Wolf, UCSF

First Author

Peter Norwood

Presenting Author

Peter Norwood

Comparing cross-sectional stepped-wedge and cluster randomized trials with time-to-event endpoints

Stepped wedge cluster randomized trials (SW-CRTs) are a form of trial in which clusters are progressively transitioned from control to intervention, with the timing of each cluster's transition randomized. SW-CRTs can be attractive when it is difficult to implement the intervention simultaneously in enough clusters to support a parallel design; however, they pose their own logistical and analytic challenges that may make a traditional cluster randomized trial (CRT) a more feasible option, and it is not always clear which design is better suited. In addition, SW-CRTs with continuous or binary endpoints often have greater power than parallel CRTs; it is unclear whether this holds for time-to-event endpoints, where interest in pragmatic trials is growing. In this talk, we compare the operating characteristics of cross-sectional parallel CRTs and SW-CRTs with time-to-event endpoints, testing either cluster- or individual-level interventions. We also explore two paradigms of cross-sectionality for SW-CRTs: events that may be observed beyond the period of study entry, and events that are administratively censored at the end of a period.
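
As a rough illustration of the cross-sectional stepped-wedge structure with a time-to-event endpoint, the sketch below (with hypothetical hazard rates, cluster counts, and a lognormal cluster frailty) generates exponential event times and applies administrative censoring at the end of each period, corresponding to the second cross-sectionality paradigm mentioned above.

    import numpy as np

    rng = np.random.default_rng(7)
    n_clusters, n_periods, n_per_period, period_len = 12, 5, 20, 6.0

    # Randomize the period at which each cluster crosses from control to intervention
    cross = rng.permuted(np.repeat(np.arange(1, n_periods), n_clusters // (n_periods - 1)))

    rows = []
    for c in range(n_clusters):
        frailty = rng.lognormal(sigma=0.3)                    # cluster-level heterogeneity
        for p in range(n_periods):
            treated = p >= cross[c]
            rate = 0.10 * frailty * (0.7 if treated else 1.0) # hypothetical hazard ratio 0.7
            t = rng.exponential(1.0 / rate, size=n_per_period)
            event = t <= period_len                           # administrative censoring at period end
            rows.append((c, p, int(treated), event.sum()))

    events_by_arm = {0: 0, 1: 0}
    for _, _, trt, n_events in rows:
        events_by_arm[trt] += n_events
    print("observed events (control, intervention):", events_by_arm[0], events_by_arm[1])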

Keywords

pragmatic trial

intracluster correlation

trial design

design choice

clustering 

View Abstract 3021

First Author

Mary Ryan, University of Wisconsin-Madison

Presenting Author

Mary Ryan, University of Wisconsin-Madison

Comparison of the Efficiency of Robust Estimators for the Nested Case-Control Design

The nested case-control (NCC) design is often used to reduce data collection costs in rare disease settings where resources are limited. The NCC sampling scheme randomly samples a small number of controls from the risk set at each event time. Samuelsen (1997) proposed a pseudolikelihood-based approach for estimating model coefficients under NCC sampling. This estimator allows sampled controls to enter all risk sets in which they are at risk and reweights their contributions to account for biased sampling. Nuño and Gillen (2022) found that under model misspecification, the standard estimator proposed by Thomas (1977) estimates a different quantity that depends on the number of controls sampled at each event time. To account for this, they proposed an estimator based on the missing data framework, which imputes covariate values for at-risk subjects in the full cohort who are not sampled under the NCC design. The Samuelsen estimator, while not developed for this purpose, also has this benefit. The current work compares the efficiency of the two estimators under various settings and provides considerations for the use of each.
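
To make the reweighting concrete, here is a small numpy sketch of Samuelsen-style inverse inclusion-probability weights, assuming m controls are sampled without replacement at each event time; the variable names and simplified risk-set definition are illustrative, not the estimator's full implementation.

    import numpy as np

    def samuelsen_weights(entry, exit, event, m):
        # Accumulate, for each subject, the probability of never being sampled as a control
        # across all event times at which the subject is at risk.
        p_never = np.ones(len(exit))
        for t in np.sort(exit[event]):
            at_risk = (entry < t) & (exit >= t)
            n_risk = at_risk.sum()
            if n_risk > 1:
                p_never[at_risk] *= max(0.0, 1.0 - m / (n_risk - 1))
        w = np.ones(len(exit))                  # cases enter with weight 1
        ctrl = ~event & (p_never < 1.0)         # controls with positive sampling probability
        w[ctrl] = 1.0 / (1.0 - p_never[ctrl])   # sampled controls are up-weighted
        return w

    # Tiny illustration: 6 subjects, 1 control sampled per case (m = 1)
    entry = np.zeros(6)
    exit = np.array([2.0, 3.0, 4.0, 5.0, 6.0, 7.0])
    event = np.array([True, False, True, False, False, True])
    print(np.round(samuelsen_weights(entry, exit, event, m=1), 2))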

Keywords

nested case-control design

efficient sampling designs

time-to-event data 

View Abstract 3645

First Author

Michelle Nuno

Presenting Author

Michelle Nuno

Handling Missing Outcome Data in Cluster Randomized Trials with Both Individual and Cluster Dropout

Missing outcome data are common in cluster randomized trials (CRTs) and can arise from dropout of individuals, termed "sporadically" missing data, or dropout of clusters, termed "systematically" missing data. Multilevel multiple imputation (MI) methods that handle hierarchical data have been developed, but their application to CRTs is limited. We examined the performance of four multilevel MI methods for handling sporadically and systematically missing CRT outcome data via a simulation study. Our findings showed that one multilevel MI method outperformed the others across a range of scenarios. Using the best-performing MI method, we developed approaches for conducting sensitivity analyses to test the robustness of inferences under different missing not at random (MNAR) assumptions. These approaches allow different MNAR assumptions for cluster dropout and individual dropout, reflecting that the two may arise from different missing data mechanisms. Our methods are illustrated using a real data application. The findings lead to recommendations for handling missingness in cluster randomized trials.
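
To give a rough sense of what a single multilevel imputation step can look like, the sketch below (hypothetical data; statsmodels MixedLM as the random-intercept model) imputes sporadically missing outcomes using a cluster's estimated random effect and systematically missing clusters using a draw from the random-effect distribution. A proper multilevel MI procedure would repeat such draws with parameter uncertainty to create multiple completed data sets; this is only a single-imputation illustration.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(3)
    n_clusters, n_per = 10, 20
    cluster = np.repeat(np.arange(n_clusters), n_per)
    trt = np.repeat(np.arange(n_clusters) % 2, n_per)            # cluster-level treatment
    u = np.repeat(rng.normal(0, 0.5, n_clusters), n_per)         # random intercepts
    y = 1.0 + 0.8 * trt + u + rng.normal(0, 1.0, n_clusters * n_per)

    df = pd.DataFrame({"cluster": cluster, "trt": trt, "y": y})
    df.loc[rng.random(len(df)) < 0.15, "y"] = np.nan             # sporadic (individual) dropout
    df.loc[df["cluster"].isin([0, 1]), "y"] = np.nan             # systematic (cluster) dropout

    obs = df.dropna()
    fit = smf.mixedlm("y ~ trt", obs, groups=obs["cluster"]).fit()
    tau = np.sqrt(float(fit.cov_re.iloc[0, 0]))                  # random-intercept SD
    sigma = np.sqrt(fit.scale)                                    # residual SD

    imputed = df.copy()
    for c in df["cluster"].unique():
        miss = imputed["y"].isna() & (imputed["cluster"] == c)
        if not miss.any():
            continue
        # use the estimated random effect if the cluster has observed data, else draw one
        re = float(fit.random_effects[c].iloc[0]) if c in fit.random_effects else rng.normal(0, tau)
        mean = fit.fe_params["Intercept"] + fit.fe_params["trt"] * imputed.loc[miss, "trt"]
        imputed.loc[miss, "y"] = mean + re + rng.normal(0, sigma, miss.sum())

    print("remaining missing outcomes:", imputed["y"].isna().sum())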

Keywords

clustered data

missing data

MNAR

multiple imputation

systematically missing 

View Abstract 1876

Co-Author(s)

Beth Glenn, UCLA
Roshan Bastani, UCLA
Catherine Crespi, University of California Los Angeles, Department of Biostatistics

First Author

Analissa Avila, UCLA

Presenting Author

Analissa Avila, UCLA

Robust covariate adjustment for randomized clinical trials when covariates are subject to missingness

In randomized clinical trials, the primary goal is often to estimate the treatment effect. Robust covariate adjustment is a preferred statistical method because it improves efficiency and is robust to model misspecification; however, it remains underutilized in practice. One practical challenge is missing covariates. Although missing covariates have been studied extensively, most existing work focuses on the relationship between outcome and covariates, with little attention to robust covariate adjustment for estimating the treatment effect when covariates are missing. In this article, we recognize that the usual robust covariate adjustment can be generalized directly to the setting of missing covariates under the additional assumption that missingness is independent of treatment assignment. We also propose three implementation strategies to handle the increased dimensionality of the working models caused by missingness. Simulations and a data application demonstrate the performance of the proposed strategies, and practical recommendations are presented in the discussion.
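
One generic way to implement covariate adjustment with missing covariates in a randomized trial, assuming missingness is independent of treatment assignment, is to augment the working model with missingness indicators and use arm-specific regressions with g-computation. The sketch below only illustrates that idea with simulated data; it is not necessarily one of the three strategies proposed in the paper.

    import numpy as np

    rng = np.random.default_rng(11)
    n = 500
    x = rng.normal(size=(n, 2))
    a = rng.integers(0, 2, n)                       # randomized treatment assignment
    y = 1.0 + 0.5 * a + x @ np.array([0.8, -0.4]) + rng.normal(size=n)
    x[rng.random((n, 2)) < 0.2] = np.nan            # covariates missing, independent of a

    # missing-indicator augmentation: fill with 0 and append indicators to the working model
    miss = np.isnan(x).astype(float)
    xf = np.where(np.isnan(x), 0.0, x)
    design = np.column_stack([np.ones(n), xf, miss])

    def fit_ols(X, yv):
        beta, *_ = np.linalg.lstsq(X, yv, rcond=None)
        return beta

    b1 = fit_ols(design[a == 1], y[a == 1])
    b0 = fit_ols(design[a == 0], y[a == 0])
    # g-computation: average arm-specific predictions over the full sample
    effect = (design @ b1).mean() - (design @ b0).mean()
    print("covariate-adjusted treatment effect:", round(effect, 3))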

Keywords

imputation

missing covariates

randomized clinical trials

robust covariate adjustment 


Co-Author

Min Zhang, Tsinghua University

First Author

Jiaheng Xie

Presenting Author

Jiaheng Xie

SAS macros for group sequential designs for survival endpoint with non-proportional hazards

Group sequential designs (GSDs) allow sequential monitoring of efficacy and safety through interim testing in clinical trials. Although the literature is well developed for GSDs with continuous and binary endpoints, options are limited for time-to-event endpoints. Commercial software supports GSDs only under the proportional hazards assumption or with exponentially distributed survival times. We have developed a novel simulation-based GSD for the non-proportional hazards scenario utilizing the concept of Relative Time. We present two new SAS macros that can execute such GSDs: (i) with both efficacy and futility boundaries, and (ii) with an efficacy-only boundary. The macros provide many advanced features, including binding/non-binding futility rules, the option to skip futility testing, flexible error spending, equally or unequally spaced interim looks, and allowance for administrative censoring and dropouts. User-friendly output (both numeric and graphic) displays the sample size calculation and the efficacy/futility cutoff boundaries. The macros also generate three-dimensional plots showing the expected reduction in sample size when using a GSD compared with a conventional fixed two-arm trial.
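
For readers outside SAS, the general idea of simulation-based boundary calibration can be sketched in a few lines of Python: simulate trials under the null, compute the log-rank statistic at each interim look, and choose boundaries that control the overall type I error. This toy version assumes exponential survival and a common Pocock-style boundary; it does not implement the Relative Time method or reproduce the macros' features.

    import numpy as np

    rng = np.random.default_rng(2024)

    def logrank_z(time, event, group):
        # Two-sample log-rank Z statistic via a loop over distinct event times
        z_num, var = 0.0, 0.0
        for t in np.unique(time[event]):
            at_risk = time >= t
            n_all, n1 = at_risk.sum(), (at_risk & group).sum()
            d_all = ((time == t) & event).sum()
            d1 = ((time == t) & event & group).sum()
            z_num += d1 - d_all * n1 / n_all
            if n_all > 1:
                var += d_all * (n1 / n_all) * (1 - n1 / n_all) * (n_all - d_all) / (n_all - 1)
        return z_num / np.sqrt(var)

    def trial_z_at_looks(haz_ctrl, haz_trt, n_per_arm=100, accrual=12.0, looks=(18.0, 30.0)):
        entry = rng.uniform(0, accrual, 2 * n_per_arm)
        group = np.repeat([False, True], n_per_arm)
        t_event = rng.exponential(1.0 / np.where(group, haz_trt, haz_ctrl))
        zs = []
        for look in looks:
            enrolled = entry < look
            follow = look - entry[enrolled]
            obs = np.minimum(t_event[enrolled], follow)   # administrative censoring at the look
            evt = t_event[enrolled] <= follow
            zs.append(logrank_z(obs, evt, group[enrolled]))
        return np.array(zs)

    # Calibrate a common (Pocock-style) two-look efficacy boundary under the null by simulation;
    # 500 replicates keeps the run fast, a real calibration would use many more.
    null_z = np.array([trial_z_at_looks(0.08, 0.08) for _ in range(500)])
    c = np.quantile(np.abs(null_z).max(axis=1), 0.95)     # overall two-sided alpha of 0.05
    print("simulated common efficacy boundary:", round(c, 2))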

Keywords

group sequential design

non-proportional hazards

SAS macro

time-to-event endpoint

sample size

Relative Time 

View Abstract 3833

First Author

Milind Phadnis, University of Kansas Medical Center

Presenting Author

Milind Phadnis, University of Kansas Medical Center