Economical Methods for Experimental Design

Chair

Xianglin Zhao

Organizer

Luke Hagar, McGill University
 
Wednesday, Aug 6: 10:30 AM - 12:20 PM
0646 
Topic-Contributed Paper Session 
Music City Center 
Room: CC-102A 

Applied

Yes

Main Sponsor

SSC (Statistical Society of Canada)

Co-Sponsors

Business and Economic Statistics Section
Quality and Productivity Section

Presentations

Economical Sample Size Calculations for Complex Designs

In the design of Bayesian clinical trials, the operating characteristics are typically evaluated by estimating the sampling distribution of posterior summaries via Monte Carlo simulation. It is computationally intensive to repeat this estimation process for each design configuration considered, particularly for clustered data that are analyzed using complex, high-dimensional models. We propose an efficient method to assess operating characteristics and determine sample sizes for Bayesian trials with clustered data and multiple endpoints. We prove theoretical results that enable posterior probabilities to be modeled as a function of the sample size. Using these functions, we assess operating characteristics at a range of sample sizes given simulations conducted at only two sample sizes. The applicability of our methodology is illustrated using a current cluster-randomized Bayesian adaptive clinical trial with multiple endpoints. 
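
As a rough illustration of the idea of assessing operating characteristics across a range of sample sizes from simulations at only two sample sizes, the sketch below models logit-transformed posterior probabilities as approximately linear in the sample size and links the two simulation runs quantile by quantile. The functional form, the placeholder simulation step, and the decision threshold are assumptions for illustration, not the theoretical results proved in the talk.

```python
# Minimal sketch: estimate operating characteristics across sample sizes from
# simulations at only two sample sizes, by modeling (transformed) posterior
# probabilities as a function of n. The linear-in-n logit model is an
# illustrative assumption, not necessarily the authors' theoretical result.
import numpy as np

rng = np.random.default_rng(1)

def simulate_posterior_probs(n, n_sims=1000):
    # Placeholder for the expensive step: for each simulated trial of size n,
    # fit the (possibly hierarchical) model and record Pr(effect > 0 | data).
    # Here we fake it with a Beta draw whose concentration grows with n.
    return rng.beta(0.5 + 0.02 * n, 0.5 + 0.005 * n, size=n_sims)

def logit(p):
    p = np.clip(p, 1e-6, 1 - 1e-6)
    return np.log(p / (1 - p))

n1, n2 = 50, 200
z1 = np.sort(logit(simulate_posterior_probs(n1)))
z2 = np.sort(logit(simulate_posterior_probs(n2)))

# Link the two simulations quantile by quantile and interpolate linearly in n.
slope = (z2 - z1) / (n2 - n1)

def estimated_power(n, threshold=0.975):
    z_n = z1 + slope * (n - n1)             # predicted logit posterior probs at size n
    return np.mean(z_n > logit(threshold))  # Pr(posterior probability exceeds threshold)

for n in (50, 100, 150, 200, 300):
    print(n, round(estimated_power(n), 3))
```

Only the two calls to simulate_posterior_probs are expensive; every other sample size is evaluated essentially for free from the fitted relationship.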

Co-Author

Shirin Golchi, McGill University

Speaker

Luke Hagar, McGill University

Batch Sequential Experimental Design for Calibration of Stochastic Simulation Models

Calibration of expensive simulation models typically relies on an emulator, built from simulation outputs generated across various parameter settings, to replace the actual model. The noisy outputs of stochastic simulation models require many simulation evaluations to learn the complex input-output relationship effectively. Sequential design with an intelligent data-collection strategy can improve the efficiency of the calibration process. The growth of parallel computing environments can further enhance calibration efficiency by enabling simultaneous evaluation of the simulation model at a batch of parameters within a sequential design. This article proposes novel criteria that determine whether a new batch of simulation evaluations should be assigned to existing parameter locations or to unexplored ones to minimize the uncertainty of posterior prediction. Analysis of several simulated models and real-data experiments from epidemiology demonstrates that the proposed approach results in improved posterior predictions. 
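
A rough sketch of the batch-sequential idea follows, using a Gaussian-process emulator and a simplified replicate-versus-explore rule based on predictive standard deviations. The toy simulator, kernel choices, and this rule are assumptions standing in for the proposed criteria, which target posterior-prediction uncertainty directly.

```python
# Minimal sketch of a batch-sequential design loop for calibrating a stochastic
# simulator with a Gaussian-process emulator. The replicate-vs-explore rule
# (compare predictive standard deviations) is a simplified stand-in for the
# paper's criteria.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

def simulator(theta, reps):
    # Stochastic simulation model (toy stand-in): noisy quadratic response.
    return (theta - 0.3) ** 2 + rng.normal(0, 0.1, size=reps)

# Initial design: a few parameter settings, each with a few replicates.
thetas = list(np.linspace(0.0, 1.0, 5))
X = np.repeat(thetas, 3)[:, None]
y = np.concatenate([simulator(t, 3) for t in thetas])

batch_size, n_stages = 4, 5
candidates = np.linspace(0.0, 1.0, 101)[:, None]

for stage in range(n_stages):
    gp = GaussianProcessRegressor(kernel=RBF(0.2) + WhiteKernel(0.01),
                                  normalize_y=True).fit(X, y)
    _, sd_existing = gp.predict(np.array(thetas)[:, None], return_std=True)
    _, sd_candidates = gp.predict(candidates, return_std=True)

    if sd_existing.max() >= sd_candidates.max():
        theta_new = thetas[int(np.argmax(sd_existing))]              # replicate where noisy
    else:
        theta_new = float(candidates[np.argmax(sd_candidates), 0])   # explore new location
        thetas.append(theta_new)

    y_new = simulator(theta_new, batch_size)   # the batch can be run in parallel
    X = np.vstack([X, np.full((batch_size, 1), theta_new)])
    y = np.concatenate([y, y_new])
    print(f"stage {stage}: allocated batch at theta = {theta_new:.2f}")
```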

Speaker

Ozge Surer, Miami University

Bayesian Optimal Designs for Experiments on Networks

We consider the problem of designing an experiment in which the experimental units are connected on a network. To find optimal designs for such experiments, the experimental outcomes are assumed to follow a network-outcome model in which units potentially influence one another. Due to network interference, these outcome models are often complex, and design criteria derived from them involve unknown parameters and thus cannot be evaluated directly without assumptions about those parameters' values. We mitigate this problem by defining a Bayesian design criterion: the mean squared error of the average treatment effect estimator integrated over a prior distribution for the unknown parameters. In general, this criterion has no closed-form formula, so traditional algorithms for finding optimal designs cannot be applied. Instead, we propose and study the use of a genetic algorithm to find near-optimal designs. Through simulations with various real-life networks and network-outcome models, we demonstrate the robust performance of our method compared to existing design construction strategies.
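
A rough sketch of a genetic algorithm searching over treatment assignments on a network is shown below. The outcome model, the prior on the interference effect, and the GA settings are illustrative assumptions, with the Bayesian criterion approximated by the Monte Carlo mean squared error of the difference-in-means estimator under prior draws.

```python
# Minimal sketch of a genetic algorithm for near-optimal treatment assignment
# on a network. Outcome model, prior, and GA settings are illustrative.
import numpy as np

rng = np.random.default_rng(2)
n = 40
A = np.triu(rng.random((n, n)) < 0.1, k=1)
A = (A | A.T).astype(float)                       # symmetric adjacency matrix
deg = np.maximum(A.sum(axis=1), 1.0)

def bayes_mse(z, n_prior=200, tau=1.0):
    # Average squared error of the naive ATE estimator over prior draws of the
    # interference strength gamma and of the outcome noise.
    if z.sum() == 0 or z.sum() == n:
        return np.inf
    exposure = A @ z / deg                        # fraction of treated neighbours
    errs = np.empty(n_prior)
    for s in range(n_prior):
        gamma = rng.normal(0.5, 0.25)             # prior on interference effect
        y = tau * z + gamma * exposure + rng.normal(0, 1, n)
        tau_hat = y[z == 1].mean() - y[z == 0].mean()
        errs[s] = (tau_hat - tau) ** 2
    return errs.mean()

def random_design():
    z = np.zeros(n, dtype=int)
    z[rng.choice(n, n // 2, replace=False)] = 1   # balanced assignment
    return z

pop = [random_design() for _ in range(30)]
for gen in range(20):
    fitness = np.array([bayes_mse(z) for z in pop])
    parents = [pop[i] for i in np.argsort(fitness)[:10]]   # keep the best designs
    children = []
    while len(children) < 20:
        a, b = rng.choice(10, 2, replace=False)
        cut = rng.integers(1, n - 1)
        child = np.concatenate([parents[a][:cut], parents[b][cut:]])  # crossover
        flip = rng.random(n) < 0.02                                   # mutation
        children.append(np.where(flip, 1 - child, child))
    pop = parents + children

best = min(pop, key=bayes_mse)
print("approx. best Bayesian MSE:", round(bayes_mse(best), 3))
```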

Co-Author(s)

Stefan Steiner, University of Waterloo
Nathaniel Stevens, University of Waterloo

Speaker

Trang Bui

Experimental Design in Observational Data

Observational data are difficult to design for, because the controlled aspect of design of experiments is absent. In marketing specifically, we use observational experimental design techniques such as augmented synthetic controls to construct an accurate control against which to gauge treatment performance. This approach uses simulations to understand potential outcomes from a test and to evaluate the best set of locations for testing. We have shown this approach to be successful in demonstrating marketing efficacy, and it is useful for calibrating media-mix models.
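
A rough sketch of the synthetic-control idea for a single treated market follows: SCM-style weights are approximated with non-negative least squares and then bias-corrected with a ridge outcome model as the augmentation. All data, tuning values, and the specific estimator form are illustrative assumptions rather than the presenter's exact implementation.

```python
# Minimal sketch of an augmented synthetic control for one treated market.
# Weights via non-negative least squares, bias correction via a ridge outcome
# model; all numbers are made-up toy data.
import numpy as np
from scipy.optimize import nnls
from sklearn.linear_model import Ridge

rng = np.random.default_rng(3)
n_donors, n_pre, n_post = 12, 30, 10

# Weekly sales (say) for donor markets and one treated market.
donors_pre = rng.normal(100, 10, (n_donors, n_pre))
donors_post = donors_pre[:, :n_post] + rng.normal(0, 2, (n_donors, n_post))
treated_pre = donors_pre[:3].mean(axis=0) + rng.normal(0, 1, n_pre)
treated_post = donors_post[:3].mean(axis=0) + 5.0   # true lift of 5 after launch

# 1) SCM-style weights: non-negative, rescaled to sum to one.
w, _ = nnls(donors_pre.T, treated_pre)
w = w / w.sum()

# 2) Ridge outcome model mapping pre-period paths to post-period paths.
ridge = Ridge(alpha=1.0).fit(donors_pre, donors_post)

# 3) Augmented synthetic control: weighted donors plus a bias correction for
#    the imbalance left over after weighting.
synthetic_post = w @ donors_post + (
    ridge.predict(treated_pre[None, :]) - w @ ridge.predict(donors_pre)
).ravel()

lift = treated_post - synthetic_post
print("estimated average lift:", round(lift.mean(), 2))
```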

Speaker

Shane Bookhultz, Tombras

An Adaptive Enrichment Design Using Bayesian Model Averaging for Selection and Threshold-Identification of Tailoring Variables

Precision medicine is transforming healthcare by personalizing treatments, improving outcomes, and reducing costs. Clinical trials increasingly target patient subgroups with better treatment responses. Biomarker-driven adaptive enrichment designs, which start with a general population and later focus on treatment-sensitive individuals, are gaining popularity. Inspired by a study on positive airway pressure for sleep apnea and cardiovascular outcomes, we propose a Bayesian adaptive enrichment design. It dynamically identifies key biomarkers using free-knot B-splines and Bayesian model averaging. Interim analyses assess biomarker-defined subgroups, allowing early trial termination for efficacy or futility and restricting enrollment to treatment-sensitive patients. We address pre-categorized and continuous biomarkers with complex, nonlinear relationships and compare our design to a standard fixed-cutoff approach through simulations.
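
A rough sketch of the Bayesian model averaging step is given below, simplified to averaging over a grid of candidate biomarker cutoffs rather than free-knot B-splines. The outcome model, candidate cutoffs, and BIC-based marginal-likelihood approximation are illustrative assumptions, not the proposed design itself.

```python
# Minimal sketch of Bayesian model averaging over candidate biomarker
# thresholds, as a simplified stand-in for the free-knot B-spline machinery.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 600
biomarker = rng.uniform(0, 1, n)
treat = rng.integers(0, 2, n)
# Toy data-generating rule: treatment helps only when biomarker > 0.6.
y = 0.5 * treat * (biomarker > 0.6) + rng.normal(0, 1, n)

cutoffs = np.linspace(0.2, 0.8, 13)
bics = []
for c in cutoffs:
    sensitive = (biomarker > c).astype(float)
    X = sm.add_constant(np.column_stack([treat, sensitive, treat * sensitive]))
    bics.append(sm.OLS(y, X).fit().bic)

# BIC approximation to marginal likelihoods -> posterior model probabilities.
bics = np.array(bics)
log_w = -0.5 * (bics - bics.min())
post = np.exp(log_w) / np.exp(log_w).sum()

print("posterior-weighted cutoff estimate:", round(float(post @ cutoffs), 2))
print("most probable cutoff:", round(float(cutoffs[post.argmax()]), 2))
```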

Speaker

Lara Maleyeff