Contributed Poster Presentations: Biopharmaceutical Section

Shirin Golchi, Chair
McGill University
 
Monday, Aug 4: 10:30 AM - 12:20 PM
4049 
Contributed Posters 
Music City Center 
Room: CC-Hall B 

Main Sponsor

Biopharmaceutical Section

Presentations

20: A Bayesian Analysis of the EVT Effects Related to Time to Randomization

Severe ischemic stroke occurs when blood flow to the brain is blocked; it is a life-threatening emergency requiring urgent treatment. Six recent clinical trials investigated the treatment effect of endovascular thrombectomy (EVT) relative to medical management. The results of these trials conclusively support EVT for patients with a large ischemic core. However, the trials vary in their median time from onset to randomization. We hypothesize that delaying treatment diminishes the effect of EVT. We fit a Bayesian linear regression with a flat prior, modeling each trial's mean EVT treatment effect as a function of its median time from onset to randomization, and calculated a posterior probability (PP) for the slope. The results indicate that the posterior mean slope corresponds to a .0085 drop in treatment effect for every one-hour increase in median time to randomization (PP = .9915). These results can help clinicians anticipate treatment effects in clinical practice and inform future clinical trials in the StrokeNet Thrombectomy Platform (STEP). 
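
As an illustration of the modeling step described above, here is a minimal sketch of a flat-prior Bayesian linear regression of trial-level treatment effect on median onset-to-randomization time; the six trial-level values are hypothetical placeholders, not the data used in the poster.

```python
# Minimal sketch (not the authors' code): Bayesian linear regression with a flat
# prior, regressing each trial's EVT treatment effect on its median time from
# onset to randomization.  Trial-level numbers below are hypothetical.
import numpy as np

hours = np.array([5.0, 6.5, 7.2, 8.0, 9.5, 11.0])       # median onset-to-randomization (hypothetical)
effect = np.array([0.12, 0.11, 0.10, 0.09, 0.08, 0.07])  # trial treatment effect (hypothetical)

X = np.column_stack([np.ones_like(hours), hours])
beta_hat, *_ = np.linalg.lstsq(X, effect, rcond=None)
resid = effect - X @ beta_hat
n, p = X.shape
s2 = resid @ resid / (n - p)
XtX_inv = np.linalg.inv(X.T @ X)

# Under a flat prior, sigma^2 | y is scaled inverse chi-square and beta | sigma^2, y is normal.
rng = np.random.default_rng(1)
sigma2 = (n - p) * s2 / rng.chisquare(n - p, size=20_000)
slopes = rng.normal(beta_hat[1], np.sqrt(sigma2 * XtX_inv[1, 1]))

print("posterior mean slope:", slopes.mean())
print("PP(slope < 0):", (slopes < 0).mean())
```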

Keywords

Combining Trials

large ischemic core

severe ischemic stroke 

Co-Author

Byron Gajewski, University of Kansas Medical Center

First Author

Katherine Gajewski, St. Teresa's Academy High School

Presenting Author

Katherine Gajewski, St. Teresa's Academy High School

21: A novel resampling technology of time-to-event outcome for trial simulation

Methods for resampling patients from a historical trial with a time-to-event outcome, a common endpoint of interest in many studies, are lacking. In current practice, simulating patient-level data with a time-to-event outcome is fully assumption-based and can easily be questioned and challenged. Resampling patient-level data directly from a historical clinical trial stays closer to the structure of the data actually collected and preserves the outcome distribution and the correlations across covariates (both baseline and post-baseline) in the simulated datasets. We developed a novel algorithm that draws patient-level samples from historical clinical trials with a time-to-event outcome, given a user-defined target number of events, sample size, and incidence rate. The simulated samples preserve the distributional shape of the original dataset and can be used in downstream trial simulations. Our proposed algorithm can be broadly applied to studies with time-to-event endpoints to support study design and analysis planning. 
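
For intuition, the sketch below shows one simplified way to resample patient-level rows from a historical trial until a target number of events is reached; it is not the authors' algorithm, and the column name and stopping rule are assumptions for illustration.

```python
# Simplified illustration (not the authors' algorithm): draw patient-level rows with
# replacement from a historical trial until a target number of events is reached,
# so covariates and the time-to-event outcome travel together.
import numpy as np
import pandas as pd

def resample_tte(historical: pd.DataFrame, n_patients: int, target_events: int,
                 event_col: str = "event", seed: int = 0) -> pd.DataFrame:
    """Repeatedly bootstrap n_patients rows; keep the first draw whose event
    count reaches target_events (a crude stand-in for the targeted incidence)."""
    rng = np.random.default_rng(seed)
    for _ in range(10_000):
        sample = historical.sample(n=n_patients, replace=True,
                                   random_state=int(rng.integers(1 << 31)))
        if sample[event_col].sum() >= target_events:
            return sample.reset_index(drop=True)
    raise RuntimeError("target event count not reachable with this sample size")
```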

Keywords

time-to-event

simulation

clinical trial 

Co-Author(s)

Nathan Morris, Eli Lilly and Company
Bochao Jia, Eli Lilly and Company

First Author

Chaoran Hu, Eli Lilly and Company

Presenting Author

Chaoran Hu, Eli Lilly and Company

22: A Summary of Phase I Clinical Trials

Phase I clinical trials aim to find an effective and safe dose of a new drug in humans; they are also known as toxicity trials. A key safety objective is to find the maximum tolerated dose (MTD), defined as the highest dose of a drug that achieves a treatment effect without unacceptable side effects. Several designs exist for finding the MTD, including the 3+3 design, P+Q design, BOIN design, and CRM design. Phase II trials typically assess the efficacy of a drug, and Phase III trials compare the safety and efficacy of a drug with a standard of care or placebo. Phase IV trials follow a drug over a long period of time to assess efficacy and safety. 
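
The 3+3 rule mentioned above can be written as a simple decision function; the sketch below uses one common textbook formulation (institutional variants differ in the exact expansion and stopping rules).

```python
# A sketch of the classic 3+3 escalation rule (one common textbook formulation;
# specific protocols may differ in detail).
def three_plus_three(n_treated, n_dlt):
    """Decision after observing n_dlt dose-limiting toxicities among n_treated
    patients (3 or 6) at the current dose."""
    if n_treated == 3:
        if n_dlt == 0:
            return "escalate to next dose"
        if n_dlt == 1:
            return "expand to 6 patients at this dose"
        return "stop; MTD is the previous dose"
    if n_treated == 6:
        if n_dlt <= 1:
            return "escalate to next dose"
        return "stop; MTD is the previous dose"
    raise ValueError("3+3 decisions are made after cohorts of 3 or 6")

print(three_plus_three(3, 1))   # -> expand to 6 patients at this dose
```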

Keywords

Phase I clinical trials

maximum tolerated dose

3+3 design

P+Q design

BOIN design

CRM design 

Co-Author(s)

Xiaoyong Wu, University of Cincinnati
Anand Seth, Research Mentor
Jianmin Pan, University of Cincinnati

First Author

Shesh N. Rai, Biostatistics, Health Informatics & Data Science, College of Medicine

Presenting Author

Jayesh Rai, University of Cincinnati

23: Adjusted Inference for Multiple Testing Procedure in Group-Sequential Designs

In confirmatory clinical trials that employ multiple testing procedures, rigorous statistical inference is paramount to ensure validity. Two primary approaches exist to control for multiplicity: the first adjusts the significance levels and compares the unadjusted p-values against their corresponding adjusted thresholds, while the second adjusts p-values directly and evaluates them against the prespecified family-wise error rate (FWER). Implementing these methods in group-sequential designs, however, presents unique challenges. This work illustrates their application through examples of Weighted Bonferroni Group Sequential Design (WBGSD) and Weighted Parametric Group Sequential Design (WPGSD), highlighting practical considerations and interpretation. 
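
A small worked example of the two routes for a (non-sequential) weighted Bonferroni test may help fix ideas; the weights and p-values below are hypothetical, and the group-sequential machinery of WBGSD/WPGSD is omitted.

```python
# Illustrative weighted-Bonferroni comparison of the two routes described above
# (numbers are hypothetical; no group-sequential boundaries are involved).
alpha = 0.025
weights = {"H1": 0.8, "H2": 0.2}          # hypothesis weights summing to 1
p_values = {"H1": 0.018, "H2": 0.006}     # unadjusted p-values

# Route 1: adjust the significance levels and keep the raw p-values.
route1 = {h: p_values[h] <= weights[h] * alpha for h in p_values}

# Route 2: adjust the p-values and compare them with the FWER alpha.
adj_p = {h: min(1.0, p_values[h] / weights[h]) for h in p_values}
route2 = {h: adj_p[h] <= alpha for h in p_values}

print(route1)   # {'H1': True, 'H2': False}
print(route2)   # {'H1': True, 'H2': False} -- both routes reject the same hypotheses
```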

Keywords

clinical trial

multiplicity

statistical inference

group sequential design

adjusted p-value

weighted parametric test 

Co-Author(s)

Yujie Zhao, Merck & Co., Inc.
Linda Zhiping Sun
Keaven Anderson, Merck & Co., Inc.

First Author

Qi Liu, Merck & Co., Inc.

Presenting Author

Qi Liu, Merck & Co., Inc.

24: Advancing Study Design: Insights into Study Power Optimization

Composite endpoints are widely used in clinical trials to evaluate the efficacy of a new treatment or intervention, as they combine multiple clinically relevant outcomes into a single measure. Despite this advantage, a fully quantitative evaluation of the factors influencing study power, including the correlation between endpoints, effect size, randomization ratio, and sample size, remains underexplored. These elements can significantly affect the reliability of trial conclusions, particularly when a composite endpoint serves as the primary objective. This study systematically investigates how these factors affect study power when a composite endpoint defined by two continuous outcomes is used as the primary endpoint. By examining different study design scenarios, we explore how variations in these factors influence study power. The findings emphasize the critical role of endpoint correlation and effect size, along with the importance of balancing sample size and randomization ratio. This work provides a foundation for extending the methodology to more complex scenarios, including composite endpoints with more than two continuous variables and alternative endpoint types. 
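
A minimal simulation sketch of the kind of power evaluation described above is shown below; for illustration it takes the composite to be the sum of two correlated standardized outcomes, which is an assumption rather than the definition used in the study.

```python
# Hypothetical sketch: simulated power for a composite of two correlated continuous
# outcomes, here defined (for illustration only) as the sum of the two components.
import numpy as np
from scipy import stats

def simulate_power(n_trt, n_ctl, effects, rho, n_sims=5000, alpha=0.05, seed=1):
    rng = np.random.default_rng(seed)
    cov = np.array([[1.0, rho], [rho, 1.0]])
    rejections = 0
    for _ in range(n_sims):
        trt = rng.multivariate_normal(effects, cov, size=n_trt)
        ctl = rng.multivariate_normal([0.0, 0.0], cov, size=n_ctl)
        comp_t, comp_c = trt.sum(axis=1), ctl.sum(axis=1)
        _, p = stats.ttest_ind(comp_t, comp_c)
        rejections += p < alpha
    return rejections / n_sims

# e.g., 2:1 randomization, per-outcome effect sizes of 0.3 SD, correlation 0.5
print(simulate_power(n_trt=120, n_ctl=60, effects=[0.3, 0.3], rho=0.5))
```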

Keywords

Composite Endpoint

Study Power

Simulation

Clinical Trial 

Co-Author(s)

Hong Li, Takeda
Yanwei Zhang, Takeda

First Author

Shan Xiao, Takeda

Presenting Author

Shan Xiao, Takeda

25: AI/ML enabled automatic Flow Cytometry data gating and analysis

Flow cytometry profiles immune cells by detecting scattered light and fluorescent signals. Advances in flow cytometry assays have made it possible to monitor 15 or more fluorescent probes simultaneously, allowing immune cells to be characterized in greater detail. However, the standard practice for analyzing flow cytometry data involves a manual step called "gating", which requires scientists to manually define the boundaries between positive and negative cells. To overcome the limitations of manual gating and allow objective analysis of clinical flow data, we developed an AI/ML gating pipeline that identifies thresholds separating positive and negative populations based on cell distributions and closely follows a predefined gating hierarchy, allowing incorporation of biological information. To reliably identify cell populations present at low frequency, we leveraged either negative or positive controls. Compared with manual gating counts, the Pearson correlation coefficient surpassed 0.9 for all three abundance subgroups across the three validation datasets. In the more challenging rare-event gating, 12 out of 14 immune cell subpopulations had Pearson correlation coefficient 

Keywords

Flow Cytometry

Auto Gating

AI/ML

Clinical trial 

Co-Author(s)

Hewei Zhang, Pfizer
Charles Tan, Pfizer
Eve Pickering, Pfizer
John Leech, Pfizer
Subha Madhavan

First Author

Yalei Chen

Presenting Author

Yalei Chen

26: Assessing Bias in Kaplan-Meier Estimates Under Informative Censoring in Phase II Cancer Trials

Objective: Informative censoring challenges survival analysis, particularly in estimating Progression-Free Survival at six months (PFS6). This study evaluates the impact of informative censoring and the effectiveness of Inverse Probability of Censoring Weighting (IPCW)-adjusted Kaplan-Meier (KM) estimates in reducing bias.

Methods: We conducted simulations using Piecewise Exponential Models to generate survival times under different censoring mechanisms. Simulation 1 examines two informative censoring scenarios: one where censored patients have a higher progression risk and another where they have a lower risk, assessing their respective biases and implications. Simulation 2 compares traditional KM estimates, IPCW-adjusted KM estimates, and true PFS6 across varying censoring rates and sample sizes.

Results: KM estimates overestimated survival when high-risk patients were censored, introducing bias. IPCW adjustment reduced bias but did not fully eliminate it, particularly at later time points. IPCW improves PFS estimation under informative censoring but may leave residual bias. Appropriate censoring adjustments are essential for robust survival analysis in clinical trials. 
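
For readers who want a concrete picture of the IPCW adjustment, the sketch below computes an IPCW-weighted estimate of PFS6 with weights from a Kaplan-Meier fit to the censoring distribution; it is a generic illustration of one common IPCW variant, not the authors' simulation code.

```python
# Illustrative sketch (not the authors' simulation code): an IPCW-weighted estimate
# of PFS at 6 months.  Each subject whose 6-month status is observed is weighted by
# the inverse of the Kaplan-Meier estimate of the censoring distribution.
import numpy as np

def km_curve(times, events):
    """Return sorted times and Kaplan-Meier survival values at those times."""
    order = np.argsort(times)
    t, d = np.asarray(times)[order], np.asarray(events)[order]
    s, at_risk, grid, surv = 1.0, len(t), [], []
    for ti, di in zip(t, d):
        if di:
            s *= 1.0 - 1.0 / at_risk
        at_risk -= 1
        grid.append(ti)
        surv.append(s)
    return np.array(grid), np.array(surv)

def km_at(grid, surv, eval_times):
    idx = np.searchsorted(grid, eval_times, side="right") - 1
    return np.where(idx < 0, 1.0, surv[np.clip(idx, 0, None)])

def ipcw_pfs(times, events, t0=6.0):
    """IPCW estimate of P(progression-free beyond t0)."""
    times, events = np.asarray(times, float), np.asarray(events, int)
    grid, surv_c = km_curve(times, 1 - events)        # KM of the censoring distribution
    g = km_at(grid, surv_c, np.minimum(times, t0))
    known = (times > t0) | (events == 1)              # 6-month status is observed
    w = np.where(known, 1.0 / np.clip(g, 1e-8, None), 0.0)
    return np.sum(w * (times > t0)) / np.sum(w)
```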

Keywords

Kaplan-Meier Estimation

Informative Censoring

Progression-Free Survival (PFS6)

Inverse Probability of Censoring Weighting (IPCW)

Survival Analysis

Phase II Clinical Trials 

Co-Author(s)

Melissa Smith, University of Alabama at Birmingham
Charity Morgan, University of Alabama at Birmingham

First Author

Lingling Wang, University of Alabama at Birmingham

Presenting Author

Lingling Wang, University of Alabama at Birmingham

27: Bayesian Analyses and Design of Aggregated Group Sequential N-of-1 Clinical Trials

N-of-1 trials offer a personalized approach to clinical research, allowing the evaluation of individualized treatments through repeated crossover designs. Traditional hierarchical models assume a common treatment effect distribution, which may overlook the unique characteristics of distinct patient subgroups. We propose two methods: Bayesian clustering and a Bayesian mixed approach that combines hierarchical modeling and clustering. These methods dynamically group patients with similar responses while allowing for individual variation. Through extensive simulations, we evaluate the impact of different grouping thresholds on clustering accuracy. The results indicate that our mixed modeling approach outperforms traditional hierarchical methods by reducing bias and enhancing the identification of subgroups. This research advances Bayesian N-of-1 trial models and contributes to the field of precision medicine. 

Keywords

N-of-1 trial 

Co-Author

Andrew Chapple, Quantum Leap Healthcare Collaborative

First Author

Md Abdullah Al-Mamun

Presenting Author

Md Abdullah Al-Mamun

28: Bayesian optimal interval design with prespecified preference in oncology combination trials

The Bayesian optimal interval (BOIN) design, a model-assisted approach for identifying the maximum tolerated dose (MTD) in Phase I clinical trials, has become the standard method for dose finding in oncology. In combination dose-escalation studies, the safety profile of each drug as monotherapy has usually already been studied. The BOIN combination design was proposed to explore all possible dose combinations. In practice, however, prior clinical knowledge often yields preferences for specific dose combinations, or reasons to omit certain doses, which the standard BOIN combination design cannot accommodate. To address this need, we developed and evaluated a generalized BOIN combination design that incorporates such preferences (BOIN-CombP). Three categories of preference are considered: preferred, lower priority, and not considered. The performance of the BOIN-CombP design was evaluated through extensive simulations, which demonstrated that the probability of selecting the correct MTD increases when it is among the preferred doses, while BOIN-CombP achieves toxicity control comparable to the BOIN combination design (BOIN-Comb). 
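
For reference, the default BOIN escalation and de-escalation boundaries can be computed directly from the target toxicity rate; the sketch below uses the standard formulas with the usual defaults phi1 = 0.6*phi and phi2 = 1.4*phi (the preference layer of BOIN-CombP is not shown).

```python
# Standard BOIN escalation/de-escalation boundaries (Liu & Yuan, 2015); the
# combination/preference extensions described above are not shown here.
import numpy as np

def boin_boundaries(phi, phi1=None, phi2=None):
    """Return (lambda_e, lambda_d) for target DLT rate phi."""
    phi1 = 0.6 * phi if phi1 is None else phi1
    phi2 = 1.4 * phi if phi2 is None else phi2
    lam_e = np.log((1 - phi1) / (1 - phi)) / np.log(phi * (1 - phi1) / (phi1 * (1 - phi)))
    lam_d = np.log((1 - phi) / (1 - phi2)) / np.log(phi2 * (1 - phi) / (phi * (1 - phi2)))
    return lam_e, lam_d

# For a 30% target DLT rate the boundaries are roughly 0.236 and 0.358:
print(boin_boundaries(0.30))
```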

Keywords

dose finding

drug combination

interval design

maximum tolerated dose 

Co-Author(s)

Haiming Zhou, Daiichi Sankyo, Inc.
Zhaohua Lu, Daiichi Sankyo Inc
Philip He, Daiichi Sankyo Inc.

First Author

Yuxuan Chen, Emory University, Rollins School of Public Health

Presenting Author

Yuxuan Chen, Emory University, Rollins School of Public Health

29: Benefit-Risk Assessment with Complex Patient Trajectories

Assessment of benefit-risk for different subgroups/strata of patients is a long-standing challenge and is of great interest to patients, industry, and regulators. More comprehensive assessments of risk and benefit can be obtained by characterizing the joint distribution of multiple safety and efficacy outcomes and their change over time. To this end, we propose a Bayesian multivariate, discrete-time survival model for capturing the relationship between a collection of potentially recurrent safety and efficacy events. Our model can estimate overall measures of utility that trade off efficacy and safety for more complex forms of patient outcomes, and it can also be used to characterize variation in these utility measures across key patient subgroups. For subgroup analyses, our Bayesian formulation generates more stable shrinkage estimates of subgroup-specific utility measures and protects against spurious subgroup findings. We demonstrate the utility of our approach with an analysis of the TIMI 50 vorapaxar cardiovascular outcomes study, which contains both primary efficacy endpoints and recurrent thrombotic adverse events. 

Keywords

benefit-risk assessment

stratified medicine

multivariate discrete regression

potential outcomes

Bayesian posterior inference 

Co-Author(s)

Nicholas Henderson
Richard Baumgartner, Merck Research Laboratories
Shahrul Mt-Isa, MSD

First Author

Kijoeng Nam, Merck & Co., Inc.

Presenting Author

Kijoeng Nam, Merck & Co., Inc.

30: Bias correction in treatment effect estimates following data-driven biomarker cutoff selection

Predictive biomarkers play an essential role in precision medicine. For predictive biomarkers on a continuous scale, identifying an optimal cutoff to select patient subsets with greater benefit from treatment is critical and challenging. In early-stage studies, exploratory subset analyses are commonly used to select the cutoff. However, data-driven cutoff selection often biases treatment effect estimates and leads to over-optimistic expectations for the future Phase III trial. In this study, we first conducted extensive simulations to investigate factors influencing the bias, including the cutoff selection rule, the number of candidate cutoffs, the magnitude of the predictive effect, and the sample size. Our findings emphasize the need to account for bias and the uncertainties arising from small sample sizes and data-driven selection in Go/No-Go decision-making and in population and sample size planning for Phase III studies. Second, we evaluated the performance of Bootstrap Bias Correction and the Approximate Bayesian Computation method for bias correction through simulations. We conclude by recommending the application of the two approaches in clinical practice. 
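
The sketch below illustrates one common form of bootstrap bias correction after a data-driven cutoff search; the selection rule (largest apparent subgroup effect) and effect estimator (difference in subgroup means) are hypothetical stand-ins, not the rules evaluated in the study.

```python
# One common form of bootstrap bias correction after a data-driven cutoff search
# (sketch only; the selection rule and effect estimator are hypothetical).
import numpy as np
import pandas as pd

def effect_above(df, cut):
    """Treatment-vs-control difference in mean outcome among biomarker >= cut."""
    sub = df[df["biomarker"] >= cut]
    return sub.loc[sub["trt"] == 1, "y"].mean() - sub.loc[sub["trt"] == 0, "y"].mean()

def select_cutoff(df, candidates):
    """Pick the candidate cutoff with the largest apparent subgroup effect."""
    return max(candidates, key=lambda c: effect_above(df, c))

def bias_corrected_effect(df, candidates, n_boot=500, seed=0):
    rng = np.random.default_rng(seed)
    cut0 = select_cutoff(df, candidates)
    naive = effect_above(df, cut0)
    bias = []
    for _ in range(n_boot):
        boot = df.sample(frac=1.0, replace=True,
                         random_state=int(rng.integers(1 << 31)))
        cut_b = select_cutoff(boot, candidates)          # re-run selection per resample
        bias.append(effect_above(boot, cut_b) - effect_above(df, cut_b))
    return naive - np.mean(bias), cut0
```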

Keywords

Predictive Biomarker

Data-Driven Cutoff Selection

Estimation Bias

Subgroup Analyses

Bootstrap Bias Correction

Approximate Bayesian Computation 

Co-Author(s)

Wei Shi, Amgen
Spencer Woody, Amgen Inc
Qing Liu, Amgen Inc.

First Author

chi zhang

Presenting Author

chi zhang

31: BioPred: an R package for biomarkers analysis in precision medicine

The R package BioPred offers a suite of tools for subgroup and biomarker analysis in precision medicine. Leveraging Extreme Gradient Boosting (XGBoost) along with propensity score weighting and A-learning methods, BioPred facilitates the optimization of individualized treatment rules (ITR) to streamline subgroup identification. BioPred also enables the identification of predictive biomarkers and provides their importance rankings. Moreover, the package provides graphical plots tailored for biomarker analysis. This tool supports clinical researchers seeking to enhance their understanding of biomarkers and patient populations in drug development. 

Keywords

precision medicine

subgroup identification

predictive biomarker identification

causal inference 

Co-Author(s)

Yan Sun, AbbVie
Xin Huang, AbbVie Inc.

First Author

Zihuan Liu

Presenting Author

Zihuan Liu

32: Estimating the Prognostic Effect in Biomarker Real-World Studies

The selection of patient populations based on biomarkers is crucial for enhancing the precision and efficacy of targeted cancer therapies. An integrated evidence generation plan for targeted therapies should address critical questions related to biomarkers, including understanding prognostic effects that indicate the likelihood of overall survival.
Assessing these effects in real-world retrospective studies poses numerous challenges, such as confounding factors, missing data, and biases like immortal time bias. Although multivariate Cox proportional hazards (PH) models are widely used, they may not fully address all potential issues arising from real-world data.
In this talk, we will present a comparison of different methods for estimating the prognostic effects of biomarkers, using simulated data that mimics real-life scenarios. These methods include multivariate Cox PH models, a machine learning approach using random survival forests, and a propensity score-based method inspired by causal inference. We will also demonstrate cases with and without missing data imputation, as well as approaches for handling the immortal time bias. 

Keywords

Prognostic effect

random survival forest

immortal time bias

real-world evidence 

Co-Author

Dai Feng, AbbVie

First Author

Amber Lind, AbbVie

Presenting Author

Amber Lind, AbbVie

33: Estimation of Quantile Treatment Effects in Historical Control Data Borrowing: A BNP approach

Historical control data borrowing commonly focuses on estimation of the mean treatment effect. When the effect of covariates is of interest, the mean is computed conditional on covariates and called the conditional mean treatment effect. A mean treatment effect, however, may not adequately describe the impact of treatment when the distribution of the outcome is skewed or multimodal. This paper develops estimation of quantile treatment effects (QTEs), including conditional quantile treatment effects, in the context of historical control data borrowing. We use a Dirichlet process mixture model (DPMM) to estimate the density of the potential outcome given covariates, allowing for a flexible, data driven approach to capturing complex outcome distributions. The QTEs are estimated as the difference between the quantiles derived from the estimates of the treatment specific outcome distributions. Simulation studies demonstrate the performance of our method. 

Keywords

Bayesian non-parametric

Dirichlet process mixture models

data borrowing

quantile estimation

causal inference 

Co-Author(s)

Indrabati Bhattacharya, Florida State University
Elizabeth Slate, Florida State University

First Author

Sanwayee Kundu, Florida State University

Presenting Author

Sanwayee Kundu, Florida State University

35: Evaluating Various Futility Options in Phase III Biomarker-Driven Trial Designs

Phase III trials are essential for confirming the efficacy and safety of new treatments but demand substantial time and resources. To improve efficiency, futility analyses are often conducted to stop trials early when success is unlikely. Our research focused on trials with a biomarker (Bm) subgroup nested within the intent-to-treat (ITT) population and examined several futility designs: traditional, sequential (futility analysis performed on the ITT population first, then the Bm-positive subgroup), and parallel (futility analysis performed on the Bm-positive and Bm-negative subgroups simultaneously). Using extensive simulations, we evaluated type I error, power, futility rates, average sample size, and average trial duration. Results showed that the sequential method, while not inflating the type I error rate, exhibited a considerably lower futility rate under the null, leading to larger average sample sizes and longer trial durations. In contrast, the parallel futility design provided a higher futility rate under the null, smaller average sample sizes, and shorter trial durations while maintaining effective type I error control and a reasonable power reduction, making it a more suitable approach for Phase III trials with Bm-defined subgroups. 

Keywords

Phase III clinical trial

Futility analysis

Key Operating Characteristics 

Co-Author(s)

Qi Yan, Daiichi Sankyo
Haiming Zhou, Daiichi Sankyo, Inc.
Wenjing Lu, Daiichi Sankyo
Amy Qin, Daiichi Sankyo
Phillip He, Daiichi Sankyo
Qing Zhou, Daiichi Sankyo

First Author

Yanning Wu

Presenting Author

Yanning Wu

36: Federated learning methods for estimating heterogeneous treatment effect using multiple data sources

Estimation of heterogeneous treatment effects (HTE) is critical for evidence-based medicine and individualized clinical decision-making. While combining data from multiple real-world studies and randomized trials allows for larger sample sizes and greater power to estimate HTE, it is statistically challenging due to factors such as cross-study heterogeneity, confounding in observational studies, and possibly inconsistent measurements across data sources. Importantly, sharing data across research sites raises concerns about data privacy. Recently, many studies have proposed methods to estimate HTE using federated learning (FL), which enables the use of data from multiple studies without sharing individual patient information across research sites. In this poster, we will compare several FL-based approaches for estimating HTE (including parametric and non-parametric machine learning approaches) and assess their performance under different realistic scenarios through simulation studies based on real-world data, providing practical recommendations. 

Keywords

Treatment effect heterogeneity

Federated learning

Personalized medicine

Combining data 

Co-Author(s)

Mingyang Shan, Eli Lilly
Ilya Lipkovich
Elizabeth Stuart, Johns Hopkins University, Bloomberg School of Public Health

First Author

Xiao Wu

Presenting Author

Xiao Wu

37: Improving Treatment Effect Precision in Randomized Controlled Trials by Leveraging Auxiliary Data

Randomized controlled trials (RCTs) are the gold standard for evaluating treatment efficacy but often suffer from small sample sizes, leading to imprecise treatment effect estimates. Recent methods improve precision by incorporating large, observational auxiliary datasets with non-randomized but similar units. By training predictive models on these data, researchers can adjust for covariates and reduce variance without compromising randomization. While prior studies applied this approach to education experiments using same-source auxiliary data, we extend it to a medical RCT with external data. We analyzed the CHOICES (CTN-0055) RCT, which included 51 participants and assessed extended-release naltrexone (XR-NTX) for individuals with HIV and substance use disorders. Using the National Health and Nutrition Examination Survey (NHANES), we develop an auxiliary model predicting recent alcohol use. We will compare methods that integrate experimental and auxiliary data against standard estimators of XR-NTX's effect on alcohol use. We expect that incorporating auxiliary data will improve the precision of treatment effect estimates beyond what is achievable with standard RCT-based method. 

Keywords

Randomized Controlled Trials (RCTs)

Causal Inference

Treatment Effect Estimation

Observational Data Integration

Covariate Adjustment

Variance Reduction 

Co-Author

Charlotte Mann, California Polytechnic State University

First Author

Lana Huynh, California Polytechnic State University, San Luis Obispo

Presenting Author

Lana Huynh, California Polytechnic State University, San Luis Obispo

38: Longitudinal Assessment of Digital Health Measures in WatchPD study

Digital health technologies provide objective measures of Parkinson's disease (PD). This 12-month multicenter study assessed 82 individuals with early, untreated PD and 50 controls using a smartwatch, smartphone, and sensors. Participants completed clinic-based assessments and at-home tasks, including wearing a smartwatch for seven days and performing bi-weekly motor, speech, and cognitive tasks. Baseline measures, including arm swing, tremor, and finger tapping, differed significantly between groups. Longitudinal analyses showed declines in gait, increased tremor, and modest speech changes. Arm swing decreased from 25.9° to 19.9° (P = 0.004), and tremor time increased from 19.3% to 25.6% (P < 0.001). Changes in digital measures often exceeded changes in clinical scale items but not the overall scale. Findings from 44 participants in the WatchPD extension at month 36 will also be presented, demonstrating the potential of digital measures to track progression and assess therapeutics, despite challenges in data capture and study design. 

Keywords

Parkinson’s disease

Generalized Additive Model

Longitudinal study 

Co-Author(s)

Jamie Adams, Center for Health + Technology, University of Rochester Medical Center, Rochester, NY, USA
Tairmae Kangarloo, Takeda Pharmaceuticals, Cambridge, MA, USA
Vahe Khachadourian, Takeda Pharmaceuticals, Cambridge, MA, USA
Brian Tracey, Takeda Pharmaceuticals, Cambridge, MA, USA
Dmitri Volfson, Takeda
Robert Latzman, Takeda Pharmaceuticals, Cambridge, MA, USA
Joshua Cosman, AbbVie Pharmaceuticals, North Chicago, IL, USA
Jeremy Edgerton, Biogen
David Anderson, Clinical Ink, Horsham, PA, USA
Allen Best, Clinical Ink, Horsham, PA, USA
Melissa Kostrzebski, Center for Health + Technology, University of Rochester Medical Center, Rochester, NY, USA
Peggy Auinger, Center for Health + Technology, University of Rochester Medical Center, Rochester, NY, USA
Peter Wilmot, Center for Health + Technology, University of Rochester Medical Center, Rochester, NY, USA
Yvonne Pohlson, Center for Health + Technology, University of Rochester Medical Center, Rochester, NY, USA
Stella Jensen-Roberts, Center for Health + Technology, University of Rochester Medical Center, Rochester, NY, USA
Martijn Müller, Critical Path Institute, Tucson, AZ, USA
Diane Stephenson, Critical Path Institute, Tucson, AZ, USA
Ray Dorsey, Center for Health + Technology, University of Rochester Medical Center, Rochester, NY, USA

First Author

Yishu Gong, Takeda

Presenting Author

Yishu Gong, Takeda

39: Methods to Predict and Control Confounding Placebo Effect: Case Studies

In randomized clinical trials (RCTs) with subjective outcomes, the placebo effect is a major challenge in evaluating possible mechanisms related to the true therapeutic effect and is strongly associated with the failure of many RCTs. To address this issue, we have evaluated the impact of the placebo effect on RCTs by implementing innovative statistical methods to predict and control it using baseline predictors. In this presentation, we evaluate these methods through comprehensive simulations. In addition, case studies using these methods will be presented. 

Keywords

prediction

placebo effect

prognostic score

weighted MMRM 

First Author

Man Jin, AbbVie

Presenting Author

Man Jin, AbbVie

40: Multivariate equivalence tests in Sequential Multiple Assignment Randomized Trial designs

The Sequential Multiple Assignment Randomized Trial (SMART) is a design that involves multiple stages of randomization to evaluate dynamic treatment regimens. While most current SMART designs focus on univariate outcomes, there is a need to address complex real-world scenarios involving multivariate outcomes. In this study, we propose a multivariate framework for SMART designs. The primary objective is to assess whether continuing responders on their baseline interventions remains effective. Specifically, the intersection-union test and the Berger and Hsu test are adapted, and likelihood ratio, non-parametric bootstrap, and parametric bootstrap tests are proposed to test equivalence across multiple outcomes. Simulation studies demonstrate the ability of the proposed approach to detect equivalence while maintaining control of type I error rates. This framework enhances the analytical tools available for SMART designs, offering researchers a powerful tool for optimizing adaptive intervention strategies. 
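
As a concrete illustration of the intersection-union idea, the sketch below runs a two one-sided tests (TOST) procedure per outcome and declares multivariate equivalence only if every outcome passes; the pooled-variance t formulation and the margins are assumptions for illustration, not the tests developed in the study.

```python
# Sketch of an intersection-union equivalence test across several outcomes: per-
# outcome TOST, with equivalence declared only if every outcome individually passes.
import numpy as np
from scipy import stats

def tost_p(x, y, margin):
    """Pooled-variance TOST p-value for equivalence of two means within +/- margin."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    nx, ny = len(x), len(y)
    diff = x.mean() - y.mean()
    sp2 = ((nx - 1) * x.var(ddof=1) + (ny - 1) * y.var(ddof=1)) / (nx + ny - 2)
    se = np.sqrt(sp2 * (1 / nx + 1 / ny))
    df = nx + ny - 2
    p_low = stats.t.sf((diff + margin) / se, df)    # H0: diff <= -margin
    p_high = stats.t.cdf((diff - margin) / se, df)  # H0: diff >= +margin
    return max(p_low, p_high)

def iut_equivalence(outcome_pairs, margins, alpha=0.05):
    """Reject the union null (declare multivariate equivalence) only if every
    per-outcome TOST p-value falls below alpha."""
    p_vals = [tost_p(x, y, m) for (x, y), m in zip(outcome_pairs, margins)]
    return all(p < alpha for p in p_vals), p_vals
```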

Keywords

Multivariate equivalence

Intersection-union

SMART 

Co-Author

Vernon Chinchilli, Penn State University, Dept. of Public Health Sciences

First Author

Yanxi Hu, Penn State University, Dept. of Public Health Sciences

Presenting Author

Yanxi Hu, Penn State University, Dept. of Public Health Sciences

41: Personalized Dosing Decisions Using a Bayesian Exposure-Hazard Multistate Model

Multistate models provide a general framework for analyzing time-to-event data when there are multiple events of interest. We built a Bayesian competing-risk, multistate hazard model, focusing on an anticoagulant drug application where drug exposure decreases the risk of ischemic events while increasing the risk of bleeding events, both of which can ultimately lead to fatality. We present a computationally efficient strategy to estimate the steady-state exposure given only a single pair of pre- and post-dose concentration measurements using a pharmacokinetic (PK) submodel. This exposure estimate is then used in the hazard submodel as a predictor. Both submodels are estimated jointly using full Bayesian inference with Stan.
Using simulated data, we evaluate the usefulness of the multistate model compared with simpler hazard models and the benefit of estimating the drug exposure using the PK model compared with using the assigned dose as a proxy for drug exposure. Finally, we demonstrate the use of patient preferences as utility scores for principled individual dose recommendations. The approach establishes a foundation for dynamic, personalized risk prediction. 

Keywords

Bayesian competing-risk

bleeding

multistate hazard model

oral anticoagulant

stroke 

Co-Author(s)

Eric Novik, Generable Inc
Robert P. Giugliano, Brigham and Women's Hospital, Department of Medicine, Harvard Medical School
Cathy Chen, Daiichi Sankyo, Inc.
Eva-Maria Fronk, DSS-EG, Daiichi Sankyo Europe GmbH
Martin Unverdorben, Daiichi Sankyo, Inc.
Matthew Clasen, Daiichi Sankyo, Inc.
C. Michael Gibson, Beth Israel Deaconess Medical Center, Harvard Medical School
Jacqueline Buros, Generable

First Author

Juho Timonen, Generable Inc

Presenting Author

Bruna Davies Wundervald, Generable

42: Practical Consideration for a Test for Proportion for Retrospective Studies in Rare Disease Settings

Retrospective studies provide a viable alternative means for collecting important clinical information in situations where conducting prospective clinical trials is difficult, such as in rare disease settings. In clinical settings where the main endpoint of interest is an event-type, e.g., successful treatment of bleeding episodes or presence of treatment-related adverse events, the length of the look-up period for each patient in the study will greatly influence the number of events that each patient will contribute to the study. Use of event-type endpoints in this setting leads to analytic models that consider correlation of events within subjects. An example of a model that allows for testing a success proportion for binary events is proposed. The resulting test takes into account the correlation of events within subjects and is shown to have a limiting normal distribution. The power of the test is examined through simulations considering small sample settings, different event rates, and varying look-up times. Results show that the power of the test for proportion is sensitive to varying lengths of the patients' look-up periods. 
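
One simple way to account for within-patient correlation when testing a proportion is a design-effect (variance-inflation) adjustment, sketched below; this is a generic illustration and not necessarily the test proposed in the poster.

```python
# A generic design-effect adjustment for testing a success proportion when each
# patient contributes several correlated events (illustrative only).
import numpy as np
from scipy import stats

def cluster_adjusted_prop_test(successes, totals, p0, icc):
    """One-sample z-test of H0: p = p0, inflating the variance by the design
    effect 1 + (m_bar - 1) * icc, where m_bar is the average events per patient."""
    successes, totals = np.asarray(successes), np.asarray(totals)
    n_events = totals.sum()
    p_hat = successes.sum() / n_events
    deff = 1.0 + (totals.mean() - 1.0) * icc
    se = np.sqrt(p0 * (1.0 - p0) * deff / n_events)
    z = (p_hat - p0) / se
    return p_hat, z, 2 * stats.norm.sf(abs(z))

# e.g., 8 patients with differing look-up periods and event counts, ICC of 0.2
print(cluster_adjusted_prop_test([3, 5, 2, 4, 6, 1, 3, 4],
                                 [4, 6, 2, 5, 8, 1, 4, 5], p0=0.7, icc=0.2))
```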

Keywords

retrospective study

rare disease

event-type endpoint

correlation

look-up period

power 

Co-Author

Daniel Bonzo, LFB

First Author

Andreana Robertson, LFB USA Inc.

Presenting Author

Andreana Robertson, LFB USA Inc.

43: Precision Detection of Cell-Type-Specific Cancer-Specific RNA Editing Sites via a Novel Computational Pipeline from Single-Cell RNA Sequencing Data

Cancer remains a leading cause of death worldwide, largely due to its high heterogeneity, which complicates effective treatments. While targeted therapies based on DNA and RNA mutations have gained traction, the role of RNA editing— a crucial post-transcriptional modification that introduces nucleotide changes into the transcriptome—remains underexplored. Dysregulated RNA editing pathways have been implicated in cancer pathogenesis. However, identifying and quantifying RNA editing from single-cell RNA sequencing (scRNA-seq) data is challenging due to sparsity of such datasets. Without an optimized analytical approach, error rates can exceed 90%.
Here, we present a comprehensive pipeline tailored to address these challenges and enable reliable RNA editing site detection from scRNA-seq data. The pipeline includes a discovery phase at the sample level and a quantification phase at the single-cell level. Key features include reference-based cell barcode correction, enhanced alignment, per-cell duplicate read removal, and a statistical framework to mitigate background noise and false positives.
We applied this pipeline to scRNA-seq data from 24 patients with chronic myelomonocytic leukemia (CMML), a clonal hematologic malignancy in urgent need of new therapeutic strategies. While genetic mutations in CMML have been studied extensively, RNA editing remains unexplored. Our analysis identified 3,326 high-confidence RNA editing sites with ~92% accuracy, predominantly in intronic and 3′UTR regions, consistent with previous reports. Clustering based on RNA-editing patterns revealed biologically and clinically distinct subpopulations that diverged from conventional gene-expression clusters. Importantly, genes frequently mutated in CMML—such as FLT3, RUNX1, HAVCR2, and ITGAX—also exhibited extensive editing. A copy-number-variation (CNV)–driven approach distinguished healthy-like from malignant cells. Comparative analysis of CMML- and cluster-specific editing sites against healthy-like cells uncovered candidate diagnostic biomarkers, while complementary survival analysis identified cluster-specific prognostic markers. Together, our study presents a robust computational framework for interrogating RNA editing in single-cell data and offers novel insights into CMML pathogenesis. 

Co-Author(s)

Michael Deininger, Versiti Blood Research Institute
Surendra Neupane, Moffitt Cancer Center
Eric Padron, Moffitt Cancer Center
Nisansala Wickramasinghe, Versiti Blood Research Institute

Presenting Author

Tongjun Gu, Versiti Blood Research Institute

44: Predicting Accrual and Underrepresented Biomedical Research Group Using Bayesian Methods

There has been a recent push for biomedical research to incorporate more demographically, ethnically, and medically diverse cohorts – individuals whom the NIH designates as "underrepresented in biomedical research" (UBR). In clinical trials, researchers often set target rates of UBR enrollment, yet there are no established methods to help achieve these targets. Researchers must predict rates of UBR enrollment while the study is ongoing, and prediction tools are needed to do so. One well-known method uses Bayesian accrual prediction to monitor participant accrual in a trial. Here we expand upon this method by simultaneously predicting the accrual rate of UBR participants. Our prediction and monitoring tool can simultaneously predict accrual and UBR enrollment at any point during a study. We apply our method to two real-world completed clinical trial datasets: ADORE (An Assessment of DHA On Reducing Early preterm birth) and Quit2Live, a clinical trial examining disparities in quitting between African American and White adult smokers. We show the usefulness of this method at various time points in these trials and demonstrate that it can be used to monitor future trials. 
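
A minimal gamma-Poisson / beta-binomial sketch in the spirit of Bayesian accrual monitoring is shown below; the prior, the interim counts, and the UBR target are hypothetical, and the poster's actual model may differ.

```python
# A minimal gamma-Poisson / beta-binomial sketch of joint accrual and UBR monitoring
# (priors, interim counts, and targets below are hypothetical).
import numpy as np

rng = np.random.default_rng(0)

# Observed so far: 60 participants enrolled in 200 days, 18 of them UBR.
n_obs, days_obs, n_ubr = 60, 200, 18
days_left, n_target = 400, 180

# Accrual rate (participants/day): Gamma prior updated by the Poisson count.
a0, b0 = 2.0, 10.0                        # weakly informative prior (hypothetical)
rate = rng.gamma(a0 + n_obs, 1.0 / (b0 + days_obs), size=20_000)
future_n = rng.poisson(rate * days_left)

# UBR proportion: Beta prior updated by the binomial UBR count.
p_ubr = rng.beta(1 + n_ubr, 1 + n_obs - n_ubr, size=20_000)
future_ubr = rng.binomial(future_n, p_ubr)

total_n = n_obs + future_n
total_ubr_rate = (n_ubr + future_ubr) / total_n
print("P(total accrual >= target):", (total_n >= n_target).mean())
print("P(final UBR rate >= 30%):", (total_ubr_rate >= 0.30).mean())
```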

Keywords

Clinical Trials

Sample Size

Participants

Prior 

Co-Author(s)

Dinesh Pal Mudaranthakam
Byron Gajewski, University of Kansas Medical Center
Miranda Handke, Department of Internal Medicine at KUMC
Jeffery Thompson
Robert Montgomery, Department of Biostatistics and Data Science at KUMC
Akinlolu Ojo, Department of Internal Medicine at KUMC

First Author

Kaustubh Nimkar

Presenting Author

Kaustubh Nimkar

45: Robust and Meaningful Cost Estimates Using Medicare Fee-for-Service Data

In real-world evidence studies, estimates of healthcare costs per patient per year are often of interest. However, conventional statistical methods frequently fail to address complexities such as variable follow-up time and non-constant cost accumulation rates. Traditional methods typically estimate average costs per patient per year and geometric mean cost ratios, which can be challenging for healthcare stakeholders to interpret and often overlook how costs accrue over time. Using Medicare Fee-for-service claims data, we demonstrate more relevant estimates of costs at the group-level rather than summarizing the distribution of individual-level rates. We innovatively model cumulative costs over time and use quantile regression to estimate median costs, providing more robust and meaningful interpretations. Additionally, we examine the suitability of these methods for acute and chronic conditions. 
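
The sketch below shows a median (quantile) regression of hypothetical cumulative costs using statsmodels; the variable names and data-generating process are invented for illustration and do not describe the Medicare analysis itself.

```python
# Illustrative median-cost quantile regression on hypothetical cumulative costs
# (column names and the data-generating process are made up for the sketch).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "group": rng.integers(0, 2, n),                # e.g., condition-cohort indicator
    "followup_years": rng.uniform(0.5, 3.0, n),
})
# Right-skewed cumulative costs that grow with follow-up time
df["cum_cost"] = np.exp(8 + 0.4 * df["group"]
                        + 0.6 * np.log(df["followup_years"])
                        + rng.normal(0, 0.8, n))

# Median regression of cumulative cost on cohort and follow-up time
fit = smf.quantreg("cum_cost ~ group + followup_years", df).fit(q=0.5)
print(fit.summary())
```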

Keywords

real-world evidence

real-world data

costs

Medicare

quantile regression

cumulative costs 

Co-Author

Xin Zhao, Genesis Research Group

First Author

Joanna Harton, Genesis Research Group

Presenting Author

Joanna Harton, Genesis Research Group

46: Robust semi-parametric dose-response model for early phase trials

Dose-response modeling (DRM) is essential in early-stage clinical trials, where the interest is in estimating the relationship between the dose of a drug and its response. Clinically meaningful parametric forms are routinely proposed to model the response as a function of dose. However, if such models are mis-specified, the resulting inferences may be unreliable. To mitigate misspecification, a non-parametric DRM approach may be used; while it provides reliable inference, it may reduce inferential efficiency, even when a simpler parametric model is largely correct. As a compromise between fully parametric and fully non-parametric approaches, we propose a novel non-parametric Bayesian DRM formulated around a pre-specified parametric DRM. This strategy produces a dose-response curve that closely resembles the pre-specified parametric form in most regions while allowing for deviations where necessary. We perform simulations to assess the performance of this approach, including its robustness to model misspecification, and compare it with other approaches such as fully parametric models, fully non-parametric models, and model averaging. 

Keywords

Dose response modeling

Semi-parametric inference

Robustness

Interpretability 

Co-Author

Mallikarjuna Rettiganti, Eli Lilly and Company

First Author

Abhisek Chakraborty, Eli Lilly and Company

Presenting Author

Abhisek Chakraborty, Eli Lilly and Company

47: Scalar on Shape Regression Using Functional Data

Functional regression is a branch of functional data analysis (FDA) that deals with using functional variables in regression models as predictors, responses, or both. Specifically, in Scalar-on-Function (ScoF) models, some functions play the role of predictors, and some scalars are treated as responses. ScoF models have widespread applications across scientific domains and are natural extensions of the standard multivariate regression models to functional data. Our focus in this paper is on the shapes (also termed amplitudes) of functions rather than the full functions themselves. This focus is motivated, for example, by problems in neuroimaging where morphologies of anatomical parts are used to predict clinical measurements, such as disease progression or treatment effects. Accordingly, we develop a regression model, called Scalar-on-Shape (ScoSh), where the shapes of functions are treated as predictors for scalar clinical responses. 

Keywords

Functional regression analysis

Shape models

COVID data analysis

Functional shapes

Shape-based FDA 

Co-Author

Anuj Srivastava, Florida State University

First Author

Sayan Bhadra, Florida State University

Presenting Author

Sayan Bhadra, Florida State University

48: Statistical considerations of monitoring early clinical activity in dose-finding trials

A Phase I study aims to determine the safety and tolerability of compounds in selected indications. In targeted therapy and immunotherapy, the objective of dose finding is often to identify the optimal biologically effective dose rather than the maximum tolerated dose (MTD). To optimize treatment benefit, it is important to consider toxicity and efficacy simultaneously, along with their risk-benefit trade-off, during dose finding. With the rapid development of genomics and big-data technology over the past two decades, numerous gene signatures have been developed to guide clinical care and improve stratification of patients for tumor therapy. Gene signatures with high sensitivity and specificity can be used to stratify patients into different risk groups to predict treatment response. Differentially expressed gene analysis is commonly used to select related genes, but it is not optimal in small one-arm dose-finding studies. Bayesian dose-response models were used to capture early immune response and anti-tumor activity across different doses. A fit-for-purpose strategy for the pharmacodynamic gene signature is more appropriate in early-phase trials. 

Keywords

Dose finding

gene expression signature

prognostic biomarker 

Co-Author

Minyoung Lee

First Author

Xin Tong, Takeda

Presenting Author

Xin Tong, Takeda

49: Statistical designs to account for patient heterogeneity in cell therapy cancer clinical trials

This presentation describes a novel Phase I statistical trial design developed to enhance the safety and efficiency of cell therapies in oncology by specifically addressing the patient heterogeneity and dose-feasibility issues encountered in such therapies. Traditional dose-finding methods do not accommodate specific challenges encountered in cell-therapy trials, such as patients not being able to receive their intended dose due to manufacturing limitations or specific groups of patients being more prone to toxicity than others. To address these issues, we incorporate statistical models that allow dose levels to be updated adaptively based on real-time patient data concerning both toxicity and dose feasibility. Our design aims to estimate group-specific Feasible Maximum Tolerated Doses (FMTDs) by sharing toxicity data between groups and utilizing data observed at unplanned dose levels. We present simulation results showing the operating characteristics across multiple possible clinical scenarios and apply our design to the motivating trial. We also illustrate how sharing data between patient groups can simultaneously improve efficiency and avoid undesirable trial results such as reversals. 

Keywords

Statistical trial design

cancer clinical trial

biostatistics

patient heterogeneity 

Co-Author

Nolan Wages, Virginia Commonwealth University

First Author

Evan Bagley

Presenting Author

Evan Bagley

50: Statistical Methods for Composite Endpoints Accounting for Severity of Events

Composite endpoints (CE), combining death and non-fatal events, are often used in randomized clinical trials when the incidence of individual events is low. CEs typically involve different event types, implying considerable differences in event severity and cost to the patient and healthcare system. Time-to-first-event analysis treats all components of the CE equally and is heavily influenced by short-term events, potentially misrepresenting clinical significance. Novel statistical methods have been introduced to overcome these limitations, including competing risk regression (CR), negative binomial (NB), and win ratio (WR). Joint frailty models (JFM) can account for the unobserved heterogeneity in the survival and informative censoring distributions associated with different event types and patient death. A simulation approach will be used to compare the performance of four methods – CR, NB, WR, and JFM. Performance will be assessed based on type I error, power, and ease of clinical interpretation. Best-performing approaches will then be applied to analyze the Comparative Effectiveness of an Individualized Hemodialysis model vs Conventional Hemodialysis (TwoPlus) trial. 
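
For intuition about the win ratio component of the comparison, the sketch below computes an unmatched win ratio for a death-then-nonfatal-event hierarchy, assuming complete follow-up over a common window; censoring handling and the other methods (CR, NB, JFM) are not shown.

```python
# A sketch of the unmatched win ratio for a hierarchy of death, then first non-fatal
# event (assuming, for simplicity, complete follow-up over a common window).
import numpy as np

def win_ratio(death_t, event_t, arm):
    """death_t / event_t: time to death / first non-fatal event (np.inf if none);
    arm: 1 = treatment, 0 = control.  Compares every treated patient with every control."""
    death_t, event_t, arm = map(np.asarray, (death_t, event_t, arm))
    wins = losses = 0
    for dt, et in zip(death_t[arm == 1], event_t[arm == 1]):
        for dc, ec in zip(death_t[arm == 0], event_t[arm == 0]):
            if dt != dc:                       # level 1: later (or no) death wins
                wins += dt > dc
                losses += dt < dc
            elif et != ec:                     # level 2: later (or no) non-fatal event wins
                wins += et > ec
                losses += et < ec
    return wins / losses

# Example: np.inf marks patients who never had the event during the window
wr = win_ratio(death_t=[np.inf, 10, np.inf, np.inf, 8, np.inf],
               event_t=[np.inf, 6, 9, 7, 5, np.inf],
               arm=[1, 1, 1, 0, 0, 0])
print(round(wr, 2))
```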

Keywords

composite endpoints

competing risk

negative binomial

win ratio

joint frailty models

simulation 

Co-Author(s)

Shahidul Islam, Biostatistics Unit, Northwell Health, New Hyde Park, NY
Anand Rajan, MPH, NYU Grossman Long Island School of Medicine
Xiwei Yang, NYU Grossman Long Island School of Medicine
Jessica Guillaume, NYU Grossman Long Island School of Medicine
Mariana Murea, Wake Forest University School of Medicine
Jasmin Divers, NYU Long Island School of Medicine

First Author

Nihan Gencerliler, NYU Grossman Long Island School of Medicine

Presenting Author

Nihan Gencerliler, NYU Grossman Long Island School of Medicine

51: Transportability in the Era of Big Data: Challenges and Solutions with Large Target Populations

Transportability studies are conducted to obtain real-world evidence by extending an effect estimated in a trial sample to a target population of interest, where the trial sample is partially or fully disjoint from the target population. Inverse probability selection weighting (IPSW) and G-computation are widely used statistical methods in these studies. However, previous research highlights challenges when the trial sample is much smaller than the target population, leading to poor model estimation and biased results. This limitation can restrict the applicability of these statistical approaches when seeking evidence for broader populations. In this study, we conduct a simulation study to evaluate the performance of statistical methods under varying trial-to-target population size ratios and varying relationships between time-to-event outcomes and potential effect modifiers. We hypothesize that artificially increasing the trial-to-target ratio by taking a random sample from the target population improves model performance when the initial trial-to-target ratio is low. 
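
A minimal sketch of the IPSW weighting step is shown below: trial membership is modeled with logistic regression, and each trial subject receives the odds of belonging to the target population; the variable names and model are assumptions for illustration, not the study's simulation code.

```python
# Illustrative IPSW weights for transporting a trial-based effect to a target
# population (the effect modifiers are assumed to be measured in both sources).
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

def ipsw_weights(trial_X: pd.DataFrame, target_X: pd.DataFrame) -> np.ndarray:
    """Fit P(S = trial | X) on the stacked data and return, for each trial subject,
    the odds of belonging to the target population, i.e. P(S=0|X) / P(S=1|X)."""
    X = pd.concat([trial_X, target_X], ignore_index=True)
    s = np.r_[np.ones(len(trial_X)), np.zeros(len(target_X))]
    model = LogisticRegression(max_iter=1000).fit(X, s)
    p_trial = model.predict_proba(trial_X)[:, 1]
    return (1.0 - p_trial) / p_trial

# The weighted trial outcomes then estimate effects in the target population, e.g.
# via a weighted difference in means or a weighted Cox model for time-to-event data.
```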

Keywords

Transportability

Inverse Probability Selection Weighting (IPSW)

G-computation

Trial-to-Target Population Ratio

Simulation Study 

Co-Author(s)

Vivek Charu
I-Chun Thomas, Geriatric Research, Education and Clinical Center, Veterans Affairs Palo Alto, Palo Alto, CA
Manjula Tamura, Division of Nephrology, Department of Medicine, Stanford University School of Medicine, Palo Alto, CA
Maria Montez-Rath, Stanford University

First Author

Mengjiao Huang

Presenting Author

Mengjiao Huang

52: Understanding Regulatory Expectations for the Data Management of Randomization Schedules

Regulatory authorities consider the randomization schedule to be data critical to the integrity of a clinical trial. As cited in several regulatory guidance documents, randomization is a mandatory area evaluated during regulatory reviews, applications, and final study reports. Thus, data management for randomization schedules must follow robust processes.
For instance, ICH E9 states that "the randomization schedule itself should be filed securely by the sponsor or an independent party in a manner that ensures that blindness is properly maintained throughout the trial." Further, the recent FDA guidance on AI-enabled devices affirms that "data management is also an important means of identifying and mitigating bias." While data management of the randomization schedule is imperative for any type of randomized clinical trial, it may be even more critical for trials that examine AI-enabled devices due to their novelty.
This presentation will provide best practices for the data management of randomization schedules for all types of randomized trials. It will summarize the relevant regulatory guidance documents and provide illustrations of how to successfully achieve compliance. 

Keywords

Randomization

Randomization Schedules

Randomization Lists

Data Management

Regulatory Guidance Review 

Co-Author(s)

Jennifer Ross, Almac Group
Noelle Sassany, Almac Group
Anna Tomas Gasco, Almac

First Author

Alicia Jones, Almac Group

Presenting Author

Alicia Jones, Almac Group