Innovative Approaches to Clinical Trial Design and Analysis

Melissa Spann, Chair
Cytel Inc.
 
Thursday, Aug 7: 8:30 AM - 10:20 AM
4209 
Contributed Papers 
Music City Center 
Room: CC-211 

Main Sponsor

Biopharmaceutical Section

Presentations

Evaluating Innovative Composite Scoring Methods to Optimize Power and Sample Size in Clinical Trials

Clinical trials for complex diseases often use a single primary endpoint, which may overlook the multifaceted nature of these diseases. Under conventional approaches, if multiple endpoints are assessed in a trial, stringent multiplicity adjustments are required, which may inflate sample sizes, trial duration, and costs. A possible solution is to use composite scores. We propose new composite scoring methods with normalized and binary components and compare them with traditional univariate, multivariate, and equally weighted composite scoring approaches. We examine different weighting schemes based on component variability and correlations, identifying scenarios where certain composite scores perform better. Using simulations based on the Assessment of Weekly Administration of Dulaglutide in Diabetes (AWARD) studies, we provide empirical evidence on the best use cases for composite endpoints, with type 2 diabetes clinical trials as an example. Our findings offer guidelines for choosing composite score methods that increase power and reduce sample size, facilitating better decision-making in trials with multiple outcomes.
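
The trade-off between multiplicity-adjusted univariate tests and a single composite test can be sketched by simulation. The sketch below is illustrative only: the effect sizes, correlation structure, and weights are assumptions for demonstration, not the AWARD-based settings from the abstract, and the function name is hypothetical.

```python
import numpy as np
from math import erfc, sqrt

def composite_power(n_per_arm, effects, sigma, corr, weights,
                    alpha=0.05, n_sim=2000, seed=1):
    """Estimate power for a weighted composite of z-normalized endpoints.

    Endpoints are drawn from a multivariate normal with common pairwise
    correlation `corr`; components are normalized with pooled mean/SD,
    combined with `weights`, and the composite is compared between arms
    with a two-sample z-test. All inputs here are illustrative.
    """
    rng = np.random.default_rng(seed)
    k = len(effects)
    cov = np.full((k, k), corr) * np.outer(sigma, sigma)
    np.fill_diagonal(cov, np.asarray(sigma, dtype=float) ** 2)
    weights = np.asarray(weights, dtype=float)
    rejections = 0
    for _ in range(n_sim):
        ctrl = rng.multivariate_normal(np.zeros(k), cov, n_per_arm)
        trt = rng.multivariate_normal(np.asarray(effects, dtype=float),
                                      cov, n_per_arm)
        pooled = np.vstack([ctrl, trt])
        mu, sd = pooled.mean(axis=0), pooled.std(axis=0, ddof=1)
        z_ctrl = ((ctrl - mu) / sd) @ weights   # composite score per subject
        z_trt = ((trt - mu) / sd) @ weights
        diff = z_trt.mean() - z_ctrl.mean()
        se = sqrt(z_trt.var(ddof=1) / n_per_arm +
                  z_ctrl.var(ddof=1) / n_per_arm)
        p = erfc(abs(diff / se) / sqrt(2))      # two-sided normal p-value
        rejections += p < alpha
    return rejections / n_sim
```

With equal weights this reduces to the equally weighted composite used as a baseline in the abstract; variance- or correlation-based weighting schemes can be passed through the same `weights` argument.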

Keywords

composite score

power

sample size

clinical trials

multiplicity 

Co-Author

Shesh N. Rai, Biostatistics, Health Informatics & Data Science, College of Medicine

First Author

Rachana Lele, Biostatistician II, Syneos Health

Presenting Author

Rachana Lele, Biostatistician II, Syneos Health

Exact power and sample size in clinical trials with two co-primary binary endpoints

Although clinical trials in many therapeutic areas evaluate a single binary endpoint as the primary endpoint, trials in certain therapeutic areas require two co-primary binary endpoints to evaluate treatment benefit multi-dimensionally. When designing such trials, accounting for the correlation between the two endpoints can increase power and consequently reduce the required sample size, improving trial efficiency. In this study, we derive formulae for calculating the exact power and sample size in clinical trials with two co-primary binary endpoints; the proposed formulae are applicable to any statistical test for binary endpoints. Numerical investigation under various scenarios showed that the proposed formulae incorporate the correlation between the two co-primary binary endpoints into the sample size calculation, thereby allowing the required sample size to be reduced. We also demonstrate that the exact power at the required sample size calculated with our formula is approximately equal to the target power.
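
The role of the endpoint correlation in co-primary power can be illustrated with a small simulation. This is a sketch, not the authors' exact formulae: correlated binaries are generated with a Gaussian copula (an assumed dependence model), each endpoint is tested with a pooled two-proportion z-test, and the trial succeeds only if both tests are significant.

```python
import numpy as np
from statistics import NormalDist

def coprimary_power(n_per_arm, p_ctrl, p_trt, rho, alpha=0.025,
                    n_sim=2000, seed=2):
    """Simulated power for two co-primary binary endpoints.

    A latent bivariate normal with correlation `rho` is thresholded so
    each margin has its target response probability (Gaussian copula).
    The trial 'wins' only if one-sided z-tests on BOTH endpoints are
    significant at `alpha`; co-primary endpoints need no multiplicity
    adjustment. Illustrative sketch only.
    """
    rng = np.random.default_rng(seed)
    nd = NormalDist()
    cov = np.array([[1.0, rho], [rho, 1.0]])

    def draw(probs, n):
        z = rng.multivariate_normal([0.0, 0.0], cov, n)
        cuts = [nd.inv_cdf(1.0 - p) for p in probs]
        return (z > cuts).astype(float)   # n x 2 matrix of 0/1 responses

    wins = 0
    for _ in range(n_sim):
        x_c = draw(p_ctrl, n_per_arm)
        x_t = draw(p_trt, n_per_arm)
        ok = True
        for j in range(2):
            pc, pt = x_c[:, j].mean(), x_t[:, j].mean()
            pp = (pc + pt) / 2.0
            se = np.sqrt(2.0 * pp * (1.0 - pp) / n_per_arm)
            z_stat = (pt - pc) / se if se > 0 else 0.0
            ok = ok and nd.cdf(float(z_stat)) > 1.0 - alpha  # one-sided
        wins += ok
    return wins / n_sim
```

Holding the margins fixed and raising `rho` moves the joint power up from the product of the per-endpoint powers toward the single-endpoint power, which is the efficiency gain the abstract's exact formulae capture analytically.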

Keywords

binary endpoint

co-primary endpoints

correlation

exact

power

sample size 

Co-Author

Takuma Yoshida, Kagoshima University

First Author

Gosuke Homma, Astellas Pharma Inc

Presenting Author

Gosuke Homma, Astellas Pharma Inc

Justifying the sample size for a factorial trial

Factorial trials continue to grow in popularity as a method to test multiple combinations of intervention components simultaneously in a single randomized trial, but there is still a lack of clarity around how to determine a sufficient sample size. Part of this confusion stems from the fact that study teams conduct factorial trials for different reasons. In this talk, we will consider three research questions that could motivate a factorial trial: 1) identifying intervention components that have a statistically significant effect on the outcome; 2) identifying statistically significant interactions between intervention components; 3) determining which combination of components is most likely to have an optimal effect on the outcome. For each of these potential research questions, we discuss how to approach a sample size justification/power analysis. We show that studies powered to address research question 1 are sufficient to reach decisions within 10% of the optimal combination for research question 3, but that addressing research question 2 can require considerably larger sample sizes. We introduce an R Shiny package that assists with these calculations.
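
Powering for research question 1 (a component's main effect) can be sketched as follows. The 2^k design, effect-coded analysis, effect size, and normal-approximation test are all illustrative assumptions; this is not the R Shiny tool mentioned in the talk.

```python
import numpy as np
from math import erfc, sqrt
from itertools import product

def factorial_main_effect_power(n_per_cell, effect, sigma=1.0, k=3,
                                alpha=0.05, n_sim=1000, seed=3):
    """Power to detect one component's main effect in a 2^k factorial.

    All k components are effect-coded (-1/+1); only the first has a
    nonzero effect (`effect` = mean difference between its levels).
    Each simulated trial fits OLS and tests the first coefficient with
    a normal-approximation z-test. Illustrative sketch only.
    """
    rng = np.random.default_rng(seed)
    cells = np.array(list(product([-1.0, 1.0], repeat=k)))
    X = np.repeat(cells, n_per_cell, axis=0)          # full factorial design
    X = np.column_stack([np.ones(len(X)), X])
    beta = np.zeros(k + 1)
    beta[1] = effect / 2.0        # half-difference under effect coding
    XtX_inv = np.linalg.inv(X.T @ X)
    hits = 0
    for _ in range(n_sim):
        y = X @ beta + rng.normal(0.0, sigma, len(X))
        b = XtX_inv @ X.T @ y
        resid = y - X @ b
        s2 = resid @ resid / (len(X) - k - 1)
        se = np.sqrt(s2 * XtX_inv[1, 1])
        p = erfc(abs(b[1] / se) / sqrt(2))   # normal approx to the t-test
        hits += p < alpha
    return hits / n_sim
```

Because the balanced design is orthogonal, every component's main effect is estimated with the full-sample precision, which is why main-effect power (question 1) is far cheaper than interaction power (question 2).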

Keywords

factorial trial

power analysis 

Co-Author(s)

Alex Dahlen, New York University, School of Global Public Health
Jillian Strayhorn, New York University School of Global Public Health
Ruoxiang Zheng, New York University School of Global Public Health

First Author

Phuc Vu

Presenting Author

Phuc Vu

Weighted Upstrap Futility Monitoring: Algorithmically Accounting for Data-Driven Time Trends

Futility monitoring is essential in clinical trial design to allow early termination for treatment inefficacy. Because relative risk can exhibit time-varying patterns, it is important to account for time trends, yet current futility analysis methods are not designed to identify them. We propose weighted upstrapping as a solution. Weighted upstrapping assigns a time-dependent weight to each observation and repeatedly samples from the interim data to simulate thousands of fully enrolled trials. A p-value is calculated for each upstrapped dataset, and the proportion of upstrapped trials meeting a significance criterion is compared to a decision threshold to determine futility. We implemented a simulation study with varying sample sizes and relative risk trends, for both null and alternative cases, applying upstrapped futility designs as well as traditional group sequential designs for comparison. Weighted upstrapping more accurately identified futility for non-constant relative risk trends: weighted upstrap designs were 7.1% more likely than group sequential designs to stop in the non-constant relative risk null setting and 2.6% less likely to stop in the equivalent alternative case.
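
The upstrap loop described above can be sketched directly. The exponential weight kernel, significance criterion, and futility threshold below are illustrative choices, not the authors' exact tuning, and the function name is hypothetical.

```python
import numpy as np
from math import erfc, sqrt

def weighted_upstrap_futility(events, arm, times, n_full, decay=0.1,
                              alpha=0.05, success_frac=0.20,
                              n_up=1000, seed=4):
    """Weighted upstrap futility check on interim binary outcome data.

    Each interim observation gets a weight that grows with enrollment
    time (recent data are up-weighted via an exponential kernel); the
    interim sample is resampled with those weights up to the planned
    full size `n_full`, a two-proportion z-test is run on each
    upstrapped trial, and the trial is flagged futile if fewer than
    `success_frac` of upstraps reach significance.
    """
    rng = np.random.default_rng(seed)
    w = np.exp(decay * (times - times.max()))   # time-dependent weights
    w = w / w.sum()
    idx_all = np.arange(len(events))
    n_sig = 0
    for _ in range(n_up):
        idx = rng.choice(idx_all, size=n_full, replace=True, p=w)
        e, a = events[idx], arm[idx]
        n1, n0 = int((a == 1).sum()), int((a == 0).sum())
        if n1 == 0 or n0 == 0:
            continue
        p1, p0 = e[a == 1].mean(), e[a == 0].mean()
        pp = e.mean()
        if pp in (0.0, 1.0):
            continue
        se = sqrt(pp * (1 - pp) * (1 / n1 + 1 / n0))
        p_val = erfc(abs((p1 - p0) / se) / sqrt(2))   # two-sided z-test
        n_sig += p_val < alpha
    prop_sig = n_sig / n_up
    return prop_sig, prop_sig < success_frac
```

Setting `decay=0` recovers an unweighted upstrap; a positive `decay` lets a deteriorating recent relative risk dominate the simulated full-enrollment trials, which is how the method picks up non-constant trends.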

Keywords

Clinical trials

Interim futility monitoring

Weighted upstrap

Time trends

Nonparametric

Alpha-spending 

Co-Author

Alexander Kaizer, University of Colorado Anschutz Medical Campus

First Author

Jess Wild

Presenting Author

Jess Wild

Overview of statistical methods for binary endpoints in immunology clinical trials

In immunology clinical trials, the primary endpoint for evaluating drug efficacy relative to placebo is usually binary. The odds ratio, rate difference, and rate ratio are the three most popular measures for analyzing binary endpoints, and which measure to choose is not only a clinical question but also a statistical one. Because the odds ratio has good statistical properties, the CMH test and logistic regression have been widely used to derive adjusted odds ratios and corresponding p-values for binary endpoints in immunology clinical trials. In this work, we discuss population-level summaries and covariate adjustment for the unconditional treatment effect of a binary endpoint based on recent FDA guidance, conduct a comprehensive review across clinical trials, and summarize the methods used for binary endpoints in published trials. We evaluate the performance of commonly used statistical methods for binary endpoints with simulations and conclude with recommendations for analysis methods.
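
For reference, the three measures compared in the abstract follow directly from the 2×2 table. The sketch below shows the unadjusted, population-level versions only; covariate adjustment via CMH stratification or logistic regression is beyond this snippet, and the function name is hypothetical.

```python
def binary_effect_measures(events_trt, n_trt, events_ctrl, n_ctrl):
    """Unadjusted summaries for a binary endpoint from a 2x2 table:
    rate difference, rate ratio, and odds ratio."""
    p1 = events_trt / n_trt            # response rate, treatment arm
    p0 = events_ctrl / n_ctrl          # response rate, placebo arm
    rate_diff = p1 - p0
    rate_ratio = p1 / p0
    odds_ratio = (p1 / (1 - p1)) / (p0 / (1 - p0))
    return rate_diff, rate_ratio, odds_ratio
```

For example, 30/100 responders on treatment versus 20/100 on placebo gives a rate difference of 0.10, a rate ratio of 1.5, and an odds ratio of about 1.71; the divergence between the rate ratio and odds ratio grows as event rates move away from zero, which is part of why the choice of measure matters.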

Keywords

Immunology clinical trial

binary endpoints

CMH analysis

odds ratio

rate difference 

Co-Author(s)

Ning Li, Sanofi
Xiaomei Liao, Sanofi

First Author

Pascal Minini, Sanofi

Presenting Author

Xiaomei Liao, Sanofi

Sensitivity in Sample Size Determination in Cluster Randomized Trials for Count Data

For a balanced, cross-sectional parallel cluster randomized trial with count outcomes, we examined several methods to determine the number of clusters (N) necessary for a given power of the hypothesis test on the intervention effect. We applied the methods by either estimating parameter inputs using analytic derivations or leveraging empirical data. We compared methods across key parameters from a two-level Poisson generalized linear mixed model and developed a novel technique to evaluate the impact of parameter uncertainty. Using the analytic approach at 80% power, we conducted a simulation-based sensitivity analysis to estimate actual power. For the empirical approach, assuming control cluster data were available, we generated sampling distributions for N and then conducted a sensitivity analysis. Power was most sensitive to the anticipated intervention effect. Except for a few cases, the methods were equally sufficient. Given similar power between the two approaches, the empirical approach is sufficient, but the analytic approach is recommended as control cluster data are not needed, sampling variability is not a concern, and implementation is simpler via straightforward formulae.
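
A simulation-based power check of this kind can be sketched with a two-level Poisson generating model. The cluster-level log-rate z-test below is a simple stand-in for a full GLMM fit, and all parameter values (cluster count, cluster size, rates, random-intercept SD) are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from math import erfc, sqrt

def crt_count_power(n_clusters_per_arm, m_per_cluster, base_rate,
                    rate_ratio, sigma_b, n_sim=1000, alpha=0.05, seed=6):
    """Simulated power for a parallel cluster randomized trial with
    count outcomes under a two-level Poisson model: cluster random
    intercepts ~ N(0, sigma_b^2) on the log scale. Analysis is a
    cluster-level z-test on mean log rates. Illustrative sketch only.
    """
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_sim):
        def arm_log_rates(log_rr):
            b = rng.normal(0.0, sigma_b, n_clusters_per_arm)
            lam = np.exp(np.log(base_rate) + log_rr + b)
            y = rng.poisson(lam[:, None],
                            (n_clusters_per_arm, m_per_cluster))
            return np.log(y.mean(axis=1) + 0.5)   # cluster mean + offset
        r_c = arm_log_rates(0.0)
        r_t = arm_log_rates(np.log(rate_ratio))
        diff = r_t.mean() - r_c.mean()
        se = sqrt(r_t.var(ddof=1) / n_clusters_per_arm +
                  r_c.var(ddof=1) / n_clusters_per_arm)
        p = erfc(abs(diff / se) / sqrt(2))        # two-sided z-test
        hits += p < alpha
    return hits / n_sim
```

Rerunning this over a grid of `rate_ratio` and `sigma_b` values mirrors the abstract's observation that power is most sensitive to the anticipated intervention effect: varying `rate_ratio` moves estimated power far more than comparable perturbations of the variance components.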

Keywords

count outcome

cluster randomized trial

parameter uncertainty

sample size

sampling variability 

Co-Author(s)

Philip Turk, Northeast Ohio Medical University
William Hillegass, University of Mississippi Medical Center
Karla Hemming, University of Birmingham
Dustin Long, Wake Forest School of Medicine
Marc Kowalkowski
Lei Zhang, University of Mississippi Medical Center

First Author

Taylor Lefler, University of Mississippi Medical Center

Presenting Author

Taylor Lefler, University of Mississippi Medical Center