Tuesday, Aug 5: 10:30 AM - 12:20 PM
4108
Contributed Papers
Music City Center
Room: CC-104E
Main Sponsor
Biopharmaceutical Section
Presentations
Phase III clinical trials are costly and involve enrolling and treating hundreds or thousands of patients at multiple sites. The time, cost, and economic value of a drug upon completion are uncertain. We address the problem of determining when and how many test sites to open and at what rate to recruit patients. We model the problem as a discrete-time, discounted dynamic program aimed at maximizing the expected net present value of a drug, considering trial costs, the likelihood of approval based on the drug's quality, and its subsequent expected revenue if approved. The optimal policy is characterized by thresholds for the number of patients enrolled over time, which indicate when additional test centers should be opened and how many patients to target. Using data from completed trials, we show that these thresholds are relevant for decision-making, especially for low- to moderate-valued drugs. We also extend the model to account for multiple interim analyses and demonstrate that optimizing clinical trial capacity and utilization adds significant value, in addition to the option value of stopping the trial early.
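To illustrate the kind of model described above, the following Python sketch runs backward induction on a toy discrete-time, discounted enrollment problem. All parameters (costs, recruitment rate, approval probability, revenue) and the simplified deterministic dynamics are illustrative assumptions, not the authors' formulation or calibration.

```python
import numpy as np

# Toy backward-induction sketch of a discounted site-opening/enrollment
# problem.  All numbers below are illustrative assumptions only.
T = 24                # monthly decision epochs before the enrollment deadline
N_TARGET = 300        # patients required to complete the trial
MAX_SITES = 20        # maximum test centers that can be kept open
RATE_PER_SITE = 2     # expected patients recruited per site per month
SITE_COST = 15.0      # monthly operating cost per open site ($K)
DISCOUNT = 0.997      # monthly discount factor
P_APPROVAL = 0.6      # probability of approval if the trial completes
REVENUE = 50_000.0    # expected value of the drug upon approval ($K)

# V[t, n] = optimal expected NPV at epoch t with n patients already enrolled.
V = np.zeros((T + 1, N_TARGET + 1))
V[:, N_TARGET] = P_APPROVAL * REVENUE      # payoff once enrollment completes
policy = np.zeros((T, N_TARGET + 1), dtype=int)

for t in range(T - 1, -1, -1):
    for n in range(N_TARGET):
        best_val, best_sites = -np.inf, 0
        for sites in range(MAX_SITES + 1):   # action: sites open this month
            n_next = min(n + sites * RATE_PER_SITE, N_TARGET)
            value = -sites * SITE_COST + DISCOUNT * V[t + 1, n_next]
            if value > best_val:
                best_val, best_sites = value, sites
        V[t, n], policy[t, n] = best_val, best_sites

# The policy table exhibits enrollment thresholds: sites to run at month 12
# as a function of patients enrolled so far (shown at every 50 patients).
print(policy[12, ::50])
```

In the full model, recruitment is stochastic and interim analyses enter the value function; the sketch only shows the backward-induction mechanics.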
Keywords
pharmaceutical drug development
clinical trial
R&D project management
optimal investment
Co-Author
Zhili Tian, University of Houston
First Author
Hong Li, University of California, Davis
Presenting Author
Hong Li, University of California, Davis
Conditional power has been used to define futility boundaries, for sample size re-estimation, and for decision-making at an interim point in a clinical trial. The general question is "What is the probability the trial will succeed given what we observe today?" Useful answers to this question depend on how well the treatment effect for the rest of the trial can be approximated. We will also compute and discuss related quantities: the predictive probability of success and, from the beginning of the trial, the probability of success under a prior distribution (average power). The gsDesign R package and its Shiny interface will be demonstrated. We will show how to perform computations that are easily interpretable and usable by those who consume the derived quantities. We will also caution against problematic uses and interpretations of conditional power.
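For readers who want the computation itself rather than the gsDesign implementation, here is a minimal Python sketch of the standard conditional power formula for a one-sided fixed-sample test, evaluated under the design effect, the observed interim trend, and the null. The drift parameterization is the usual one; the example numbers are assumptions.

```python
import numpy as np
from scipy.stats import norm

def conditional_power(z_interim, info_frac, drift, alpha=0.025):
    """Conditional power for a one-sided fixed-sample test.

    z_interim : observed interim Z-statistic
    info_frac : information fraction t in (0, 1)
    drift     : assumed expected Z at full information (theta * sqrt(I_max))
    alpha     : one-sided significance level
    """
    z_alpha = norm.ppf(1 - alpha)
    t = info_frac
    return norm.cdf((z_interim * np.sqrt(t) - z_alpha) / np.sqrt(1 - t)
                    + drift * np.sqrt(1 - t))

# Example: halfway through the trial with an interim Z of 1.2.
z, t = 1.2, 0.5
planned_drift = norm.ppf(1 - 0.025) + norm.ppf(0.9)   # design powered at 90%
observed_drift = z / np.sqrt(t)                        # trend observed so far
print("CP under design effect:   %.3f" % conditional_power(z, t, planned_drift))
print("CP under observed trend:  %.3f" % conditional_power(z, t, observed_drift))
print("CP under null (futility): %.3f" % conditional_power(z, t, 0.0))
```

The sketch ignores interim efficacy and futility boundaries entirely; the gsDesign package handles the full group sequential case.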
Keywords
conditional power
interim analysis
group sequential design
adaptive design
The primary objective of Phase I oncology trials is to assess the safety and tolerability of novel therapeutics. Conventional dose-escalation methods identify the maximum tolerated dose (MTD) based on dose-limiting toxicity (DLT). However, as cancer therapies have evolved from chemotherapy to targeted therapies, these traditional methods have become problematic. Many targeted therapies rarely produce DLT and are administered over multiple cycles, potentially resulting in the accumulation of lower-grade toxicities, which can lead to intolerance such as dose reduction or interruption. To address this issue, we propose dual-criterion designs that find the MTD based on both DLT and non-DLT-caused intolerance. We consider both a model-based design and a model-assisted design that allow real-time decision-making in the presence of pending data due to long event assessment windows. Compared to DLT-based methods, our approaches exhibit superior operating characteristics when intolerance is the primary driver for determining the MTD and comparable operating characteristics when DLT is the primary driver.
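To make the dual-criterion idea concrete, the toy rule below applies a simple model-assisted, beta-binomial decision to DLT and non-DLT intolerance separately and de-escalates if either endpoint looks too toxic. The targets, priors, and probability cutoff are illustrative assumptions, this is not the proposed design, and the sketch ignores the pending-data aspect the proposed designs handle.

```python
from scipy.stats import beta

# Illustrative dual-criterion dose-escalation rule (NOT the authors' design):
# DLT and non-DLT intolerance (dose reduction/interruption) are treated as
# two separate beta-binomial endpoints at the current dose.
TARGET_DLT = 0.30       # assumed maximum acceptable DLT rate
TARGET_INTOL = 0.40     # assumed maximum acceptable intolerance rate
CUTOFF = 0.70           # posterior probability threshold for de-escalation

def dose_decision(n_patients, n_dlt, n_intol):
    """Return 'de-escalate', 'stay', or 'escalate' at the current dose."""
    # Beta(1, 1) priors; posterior Pr(rate > target) for each criterion.
    p_dlt_high = 1 - beta.cdf(TARGET_DLT, 1 + n_dlt, 1 + n_patients - n_dlt)
    p_intol_high = 1 - beta.cdf(TARGET_INTOL, 1 + n_intol,
                                1 + n_patients - n_intol)
    if max(p_dlt_high, p_intol_high) > CUTOFF:
        return "de-escalate"        # either criterion looks too toxic
    if max(p_dlt_high, p_intol_high) < 1 - CUTOFF:
        return "escalate"           # both criteria look comfortably safe
    return "stay"

# Example: 9 patients treated, 1 DLT, 4 intolerance-driven dose modifications.
print(dose_decision(9, 1, 4))
```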
Keywords
Phase I trials
Bayesian design
Dose optimization
In clinical trials, new treatment options may present a favorable safety profile but demonstrate efficacy comparable to existing treatments. Traditional trial designs often prioritize efficacy as the primary endpoint, requiring large sample sizes for adequate power and often disregarding the safety benefits of new treatments. To address this, we propose a novel composite endpoint that integrates efficacy and safety into a measure of clinical utility, employing a weighted approach in clinical trial design. This study treats the efficacy and safety endpoints as binary variables in a randomized clinical trial with two arms (control and test). We evaluate the operating characteristics of a combined hypothesis on clinical utility using this weighted method. Our proposed design shows superior operating characteristics for treatments with satisfactory efficacy and enhanced safety compared to designs focusing solely on efficacy. This approach not only leverages existing therapeutic insights related to both efficacy and safety but also considers the overall efficiency of drug development from the perspective of key stakeholders. The importance of balancing efficacy and safety is emphasized for optimal implementation.
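Operating characteristics of such a weighted composite could be evaluated by simulation along the following lines. This is a minimal sketch with assumed weights, response rates, and sample size, not the authors' design or weighting scheme.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2025)

# Illustrative power simulation for a weighted efficacy/safety composite.
# Weights, rates, and sample size are assumptions for the sketch only.
W_EFF, W_SAFE = 0.7, 0.3                    # clinical-utility weights
N_PER_ARM, N_SIM, ALPHA = 400, 5000, 0.025
P_EFF = {"control": 0.50, "test": 0.52}     # comparable efficacy
P_AE = {"control": 0.30, "test": 0.15}      # improved safety on the test arm

def utility(arm):
    eff = rng.binomial(1, P_EFF[arm], N_PER_ARM)
    safe = 1 - rng.binomial(1, P_AE[arm], N_PER_ARM)
    return W_EFF * eff + W_SAFE * safe       # per-patient clinical utility

z_crit = norm.ppf(1 - ALPHA)
rejections = 0
for _ in range(N_SIM):
    u_c, u_t = utility("control"), utility("test")
    se = np.sqrt(u_t.var(ddof=1) / N_PER_ARM + u_c.var(ddof=1) / N_PER_ARM)
    rejections += (u_t.mean() - u_c.mean()) / se > z_crit
print("Estimated power of the weighted composite test: %.3f"
      % (rejections / N_SIM))
```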
Keywords
Composite endpoint
Clinical Trial Design
Binary Endpoint
In pharmaceutical development, comparisons with existing treatments are important from both clinical-practice and cost-effectiveness perspectives. Although head-to-head trials can sometimes be conducted for direct comparisons, time and budget constraints often limit their feasibility. When no head-to-head trial exists, indirect comparison methods using published studies are commonly employed. Despite their practicality, these methods take the populations of the published studies as the target population, which is sometimes questionable.
In this presentation, we propose a trial design and method to address this limitation even when only summary-level data are available. Specifically, a small active-control arm is included within the confirmatory trial whose primary objective is to compare an investigational drug with placebo. We propose a data integration method that employs a generalized entropy balancing approach to efficiently combine data from the trial with external summary-level data. This method not only enhances efficiency but also ensures double robustness, providing reliable and comprehensive results. We will present both the theoretical properties of our method and the simulation results.
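As a rough sketch of the balancing step only, the active-control arm can be reweighted so that its covariate means match published summary-level means. The code below implements standard entropy balancing in its log-sum-exp dual form; the generalized method proposed here and its doubly robust estimator are not reproduced, and the data are simulated for illustration.

```python
import numpy as np
from scipy.optimize import minimize

# Standard entropy-balancing weights: reweight the trial's small
# active-control arm so its covariate means match external summary-level
# means.  Covariates and target means below are fabricated for illustration.
rng = np.random.default_rng(7)
X = rng.normal(size=(80, 3))                 # covariates for the in-trial arm
target_means = np.array([0.3, -0.1, 0.5])    # published summary-level means

def dual_objective(lam):
    # Log normalizing constant of w_i proportional to exp(lam' x_i), minus
    # lam' m; at the minimum the weighted covariate means equal target_means.
    return np.log(np.exp(X @ lam).sum()) - lam @ target_means

res = minimize(dual_objective, x0=np.zeros(X.shape[1]), method="BFGS")
w = np.exp(X @ res.x)
w /= w.sum()

print("Weighted covariate means:", X.T @ w)      # approximately target_means
print("Effective sample size:   ", 1.0 / np.sum(w**2))
```

The effective sample size gives a quick check on how much efficiency the reweighting costs before the weights are carried into the downstream comparison.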
Keywords
Indirect comparison
Summary-level data
Data integration
Generalized entropy balancing
A growing body of research indicates that drug candidates with genetically supported targets have a significantly higher probability of success compared to those without such evidence (Minikel et al., 2023). Mendelian Randomization (MR), a method that leverages genetic variants as instrumental variables, is an ideal approach for inferring causal relationships. Eli Lilly is not alone in using MR to inform clinical decision-making; companies such as Alector and GSK are employing similar strategies to explore PGRN targeting for Alzheimer's treatment. However, our proposed approach has successfully identified and validated gene targets preclinically. More recently, this tool has been applied to inform the clinical development of LY3848575, an epiregulin antagonist for polyneuropathic pain (Study CYAB: NCT06568042). Our analysis explored its pathways in both diabetic peripheral neuropathic pain and polyneuropathic pain, revealing comparable effect sizes. Overall, this approach extends beyond traditional clinical trial design by integrating genetic data into decision-making. In this talk, we will provide an overview of MR and share key results.
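For context on the basic MR machinery, the sketch below computes a standard fixed-effect inverse-variance-weighted (IVW) estimate from SNP-level summary statistics. The numbers are fabricated for illustration and are unrelated to the LY3848575 analyses described above.

```python
import numpy as np
from scipy.stats import norm

# Toy inverse-variance-weighted (IVW) Mendelian Randomization estimate from
# summary statistics for five hypothetical genetic instruments.
beta_exposure = np.array([0.12, 0.08, 0.15, 0.05, 0.10])       # SNP -> exposure
beta_outcome = np.array([0.030, 0.018, 0.041, 0.010, 0.028])   # SNP -> outcome
se_outcome = np.array([0.010, 0.012, 0.011, 0.009, 0.010])

weights = beta_exposure**2 / se_outcome**2
beta_ivw = np.sum(beta_exposure * beta_outcome / se_outcome**2) / weights.sum()
se_ivw = np.sqrt(1.0 / weights.sum())        # fixed-effect standard error
z = beta_ivw / se_ivw
p = 2 * norm.sf(abs(z))

print(f"IVW causal estimate: {beta_ivw:.3f} (SE {se_ivw:.3f}), p = {p:.2e}")
```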
Keywords
Mendelian Randomization
Polyneuropathic Pain
Diabetic Peripheral Neuropathic Pain
Epiregulin Antagonist
Clinical Trial Design
Binary outcomes are frequently used across various therapeutic areas. Integrating prognostic baseline covariates leads to more robust hypothesis testing. Despite their widespread use, there is no standardized approach across the industry, and different methods are applied with no clear pattern. An assessment of clinical studies revealed diverse methods such as the Cochran-Mantel-Haenszel (CMH) test, Mantel-Haenszel (MH) estimation with a Wald test, logistic regression, and the Miettinen and Nurminen (MN) method, among others.
Current literature and FDA guidance do not adequately address the comparative performance of these methods. We aim to enhance our understanding of potential methods for binary data analysis by evaluating their relative efficiency under varied statistical assumptions and clinical settings. Our goal is to develop a quantitative framework to identify the appropriate analysis method(s) that maximize the probability of trial success. This involves considering trial characteristics across therapeutic areas and optimizing method selection for protocol development and regulatory engagement.
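As one building block of such a framework, the CMH statistic itself is straightforward to compute and to embed in power simulations. The sketch below implements the 1-df CMH test (no continuity correction) on hypothetical stratified data; the strata counts are assumptions for illustration.

```python
from scipy.stats import chi2

def cmh_test(tables):
    """Cochran-Mantel-Haenszel chi-square test (1 df, no continuity
    correction) for stratum-specific 2x2 tables [[a, b], [c, d]],
    rows = treatment arm, columns = responder / non-responder."""
    num, var = 0.0, 0.0
    for (a, b), (c, d) in tables:
        n = a + b + c + d
        expected_a = (a + b) * (a + c) / n
        num += a - expected_a
        var += (a + b) * (c + d) * (a + c) * (b + d) / (n**2 * (n - 1))
    stat = num**2 / var
    return stat, chi2.sf(stat, df=1)

# Illustrative stratified data: three strata of a hypothetical two-arm trial.
strata = [
    [[20, 30], [12, 38]],
    [[25, 25], [15, 35]],
    [[18, 32], [10, 40]],
]
stat, pval = cmh_test(strata)
print(f"CMH statistic = {stat:.2f}, p-value = {pval:.4f}")
```

Wrapping this test (and the alternatives named above) in a common simulation loop is one way to compare their relative efficiency under different stratification and response-rate assumptions.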
Keywords
Binary endpoints
Cochran-Mantel-Haenszel (CMH)
Mantel-Haenszel (MH) estimation
Miettinen and Nurminen (MN)