Modern Developments in Bayesian Statistics

Sabina Sloman, Chair
 
Tuesday, Aug 5: 8:30 AM - 10:20 AM
4093 
Contributed Papers 
Music City Center 
Room: CC-209A 

Main Sponsor

International Society for Bayesian Analysis (ISBA)

Presentations

A Framework for Simulating Populations to Quantify Uncertainty

Applied areas such as epidemiology, social policy, and transportation often rely on complex simulation models (e.g., agent-based models) to assess the viability of potential mitigation and/or policy strategies. Among other inputs, these models tend to require specific, individual-level details for entire populations of interest, e.g., the number of occupants, their ages, and the income of every home in a municipality. Yet such detail is rarely available, or even possible to collect. Some success has resulted from pairing simulation models with synthetic population generators (e.g., iterated conditional models and other imputation methods), but challenges remain. In particular, describing and accounting for the uncertainty that the use of synthetic populations introduces into an analysis remains a difficult task. In this paper, we develop approaches for generating synthetic populations a posteriori, which can be incorporated directly into simulation-based analyses for subsequent inference. 
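
The abstract does not spell out how the posterior draws of synthetic populations are constructed, so the following is only a minimal sketch of the general idea of propagating population uncertainty through a simulator: draw several synthetic populations consistent with observed margins, run the simulation model on each, and summarize the spread of the outputs. The functions sample_population and run_simulation, and all numbers, are hypothetical stand-ins, not the authors' method.

import numpy as np

rng = np.random.default_rng(0)

def sample_population(margins, n_households, rng):
    # Hypothetical posterior draw of a synthetic population: household
    # incomes from a lognormal whose parameters are perturbed around
    # observed margins (a stand-in for a real population generator).
    mu = rng.normal(margins["log_income_mean"], 0.05)
    sigma = abs(rng.normal(margins["log_income_sd"], 0.02))
    return rng.lognormal(mu, sigma, size=n_households)

def run_simulation(population, rng):
    # Stand-in for an agent-based simulator: returns a scalar policy
    # outcome, here the share of households above an income threshold.
    return np.mean(population > 50_000) + rng.normal(0.0, 0.01)

margins = {"log_income_mean": 10.8, "log_income_sd": 0.6}
outcomes = [run_simulation(sample_population(margins, 1_000, rng), rng)
            for _ in range(200)]
print(np.percentile(outcomes, [2.5, 50, 97.5]))  # interval reflecting population uncertainty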

Keywords

social policy

simulation models

agent-based models

synthetic populations

population modeling

uncertainty quantification 

Co-Author(s)

David Higdon, Virginia Tech
Leanna House, Virginia Tech

First Author

Christopher Grubb, Virginia Tech

Presenting Author

Christopher Grubb, Virginia Tech

WITHDRAWN A Novel Bayesian Hierarchical Approach for the Joint Modeling of Binary and Continuous Outcomes

In industrial and medical fields, it is common for experiments to generate data with both quantitative and qualitative (QQ) outcomes, along with a set of predictors that may influence these outcomes. Accurately modeling these outcomes while accounting for their inherent associations and identifying a subset of significant predictors is crucial for improving prediction and optimization. To address this, we propose an innovative Bayesian hierarchical model that specifies the continuous outcome conditionally on the binary outcome. This model enables simultaneous parameter estimation and variable selection within the joint modeling framework for QQ outcomes. We develop an efficient Markov chain Monte Carlo (MCMC) sampling algorithm that integrates collapsed Gibbs sampling and the Metropolis-Hastings algorithm to compute posterior probabilities, facilitate feature selection, and optimize processes. Simulation studies are conducted to compare the performance of the proposed Bayesian method against several existing approaches in the literature. Finally, we illustrate the model's practical application with a real-data example. 
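
The abstract does not give the exact likelihood, link function, or priors; the display below is one standard conditional factorization consistent with the description (the probit link and Gaussian spike-and-slab priors are assumptions, not the authors' specification):

\[
z_i \mid \mathbf{x}_i \sim \mathrm{Bernoulli}\{\Phi(\mathbf{x}_i^{\top}\boldsymbol{\alpha})\}, \qquad
y_i \mid z_i, \mathbf{x}_i \sim \mathcal{N}(\mathbf{x}_i^{\top}\boldsymbol{\beta} + \gamma z_i,\ \sigma^{2}), \qquad
\alpha_j, \beta_j \sim (1-\pi)\,\delta_0 + \pi\,\mathcal{N}(0, \tau^{2}),
\]

where $z_i$ is the binary outcome, $y_i$ the continuous outcome, and the point mass $\delta_0$ induces variable selection on the regression coefficients.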

Keywords

Joint modeling

Bayesian hierarchical modeling

MCMC methods

Variable selection 

Co-Author

Prince Buti

First Author

Min Wang, University of Texas at San Antonio

Bayesian Hierarchical Penalized Spline Models in Stepped Wedge Cluster Randomized Trials

In the context of stepped wedge cluster randomized trials (SWCRTs), confidence intervals from traditional frequentist methods may not provide adequate coverage of an intervention's true effect, and Bayesian approaches remain underexplored. To bridge this gap, we propose two innovative Bayesian hierarchical penalized spline models. Our first model focuses on immediate intervention effects; we then extend it to account for time-varying intervention effects. Through extensive simulations and a real-world application, we demonstrate the robustness of our proposed Bayesian models. Notably, the Bayesian immediate-effect model consistently achieves the nominal coverage probability, providing more reliable interval estimation while maintaining high estimation accuracy. Furthermore, the Bayesian time-varying-effect model represents a significant advancement over existing Bayesian monotone effect curve models, offering improved accuracy and reliability in estimation, while also achieving higher coverage probability than alternative frequentist methods. To the best of our knowledge, this marks the first development of Bayesian hierarchical spline modeling for SWCRTs. 
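
As a rough illustration only (the notation and model form below are assumed, not taken from the paper), a Bayesian hierarchical penalized-spline model for an SWCRT outcome $y_{ijk}$ (cluster $i$, period $j$, subject $k$) with an immediate intervention effect could be written as

\[
y_{ijk} = \beta_0 + f(t_j) + \theta\, X_{ij} + b_i + \varepsilon_{ijk}, \qquad
f(t) = \sum_{\ell=1}^{L} u_\ell B_\ell(t), \quad u_\ell \sim \mathcal{N}(0, \tau^{2}),
\]

with cluster random effects $b_i \sim \mathcal{N}(0, \sigma_b^{2})$, errors $\varepsilon_{ijk} \sim \mathcal{N}(0, \sigma^{2})$, and $X_{ij}$ indicating whether cluster $i$ has crossed over to the intervention by period $j$. The penalty enters through the hierarchical prior on the spline coefficients $u_\ell$, and a time-varying extension would replace the constant $\theta$ with a second spline in time since crossover.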

Keywords

Cluster randomized trial

Bayesian hierarchical models

Penalized spline

Time-varying treatment effect

Stepped wedge 

Co-Author(s)

Hyung Park, New York University Grossman School of Medicine
Corita Grudzen, Memorial Sloan Kettering Cancer Center
Keith Goldfeld, New York University Grossman School of Medicine

First Author

Danni Wu, Harvard University

Presenting Author

Danni Wu, Harvard University

Bayesian multinomial multilevel logistic regression with fixed- and random-effects selection

We propose a novel Bayesian multinomial multilevel logistic regression model for settings in which each observational unit belongs to a specific group and falls into one of several non-ordered categories. To allow for automatic selection of both fixed and random effects, we use Pólya-Gamma data augmentation and develop an efficient Gibbs sampling algorithm via a hierarchical spike-and-slab prior. Inference is fast and does not rely on any analytical approximations or numerical integration. Guidelines for user-defined prior selection are developed; for example, the user can specify the a priori expected number of fixed and random effects. Simulations show that our approach is accurate and can discriminate well between different configurations of fixed and random effects. To demonstrate the general applicability of our approach, we show that our model also performs well on three real datasets from education, medicine, and political science. 
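
For context, Pólya-Gamma augmentation rests on the standard Polson-Scott-Windle identity (a known result, not anything specific to this paper):

\[
\frac{(e^{\psi})^{a}}{(1+e^{\psi})^{b}} \;=\; 2^{-b}\, e^{\kappa\psi} \int_{0}^{\infty} e^{-\omega\psi^{2}/2}\, p(\omega)\, d\omega,
\qquad \kappa = a - b/2, \quad \omega \sim \mathrm{PG}(b, 0),
\]

so that, conditional on the latent variable $\omega$, a (multinomial) logistic likelihood becomes Gaussian in the linear predictor $\psi$, and coefficients with Gaussian or spike-and-slab priors can be updated with conjugate Gibbs steps.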

Keywords

Bayesian inference

Multinomial logit

Hierarchical modeling

Gibbs sampling

Polya-Gamma augmentation

Spike and slab 

Co-Author

Hector Rodriguez-Deniz, Data Science Institute, Columbia University

First Author

Bertil Wegmann, Linköping University

Presenting Author

Bertil Wegmann, Linköping University

Estimating the Number of Components in Finite Mixture Models via Variational Approximation

This work introduces a new method for selecting the number of components in finite mixture models (FMMs) using variational Bayes, inspired by the large-sample properties of the Evidence Lower Bound (ELBO) derived from the mean-field (MF) variational approximation. Specifically, we establish matching upper and lower bounds for the ELBO without assuming conjugate priors, suggesting the consistency of model selection for FMMs based on maximizing the ELBO. As a by-product of our proof, we show that the MF approximation inherits the stable behavior of the posterior distribution, which benefits from model singularity and tends to eliminate the extra components under model overspecification. This stable behavior also leads to an $n^{-1/2}$ convergence rate for parameter estimation, up to a logarithmic factor, under model overspecification. Empirical experiments validate our theoretical findings and compare them with other advanced methods for selecting the number of components in FMMs. 
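
In generic terms (the notation below is mine, not the paper's), the selection rule maximizes the mean-field ELBO over the number of components $K$:

\[
\log p(X \mid K) \;=\; \mathrm{ELBO}_K(q) + \mathrm{KL}\!\left\{ q(\theta, z)\, \|\, p(\theta, z \mid X, K) \right\} \;\ge\; \mathrm{ELBO}_K(q),
\qquad
\widehat{K} \;=\; \arg\max_{K}\; \max_{q \in \mathcal{Q}_{\mathrm{MF}}} \mathrm{ELBO}_K(q),
\]

where $\mathcal{Q}_{\mathrm{MF}}$ is the mean-field family of factorized distributions $q(\theta, z) = q(\theta)\prod_i q(z_i)$; matching upper and lower bounds on the ELBO are what justify treating this surrogate like the marginal likelihood for model selection.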

Keywords

Finite mixture models

Model selection

Evidence lower bound

Mean-field approximation

Singular models 

Co-Author

Yun Yang, University of Illinois Urbana-Champaign

First Author

Chenyang Wang

Presenting Author

Chenyang Wang

Identifying Treatment Effect Heterogeneity with Bayesian Hierarchical Adjustable Random Partition

In precision medicine, to identify sensitive populations and direct treatment decisions, it is essential to investigate treatment effect heterogeneity by estimating subgroup-specific responses and identifying homogeneity patterns. However, comparing multiple interventions across potential subgroups is challenging. To increase power and precision, many Bayesian models partition subgroups into information-borrowing clusters, yet two challenges persist: capturing the uncertainty in partitioning configurations and adapting the strength of borrowing. We propose a flexible Bayesian hierarchical model that relies on a mixture prior with a variable number of components. For each intervention, the model partitions subgroups into mutually exclusive clusters, borrowing information within each cluster. To estimate the posterior distribution, we use a reversible jump MCMC approach that explores different partitions while adjusting borrowing strength based on within-cluster variability. We also introduce a Bayesian adaptive enrichment design to merge equivalent subgroups, enrich responsive subgroups, and terminate futile arms, improving efficiency and flexibility. 
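
As a hedged sketch only (the precise prior and parameterization are not given in the abstract), a random-partition prior of this kind can be written, for subgroup effects $\theta_1, \dots, \theta_G$ under a given intervention with cluster labels $c_g \in \{1, \dots, K\}$ and a variable number of clusters $K$, as

\[
\theta_g \mid c_g = k \;\sim\; \mathcal{N}(\mu_k, \sigma_k^{2}), \qquad (c_1, \dots, c_G) \mid K \sim p(c \mid K), \qquad K \sim p(K),
\]

where subgroups sharing a label borrow information through the common $(\mu_k, \sigma_k^{2})$, the within-cluster variance $\sigma_k^{2}$ controls the strength of borrowing, and reversible jump moves that split or merge clusters let the sampler traverse partitions with different $K$.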

Keywords

Precision Medicine

Bayesian Adaptive Trials

Bayesian Hierarchical Model

Finite Mixture Model

Reversible jump Markov Chain Monte Carlo

Random Partition 

Co-Author

Shirin Golchi, McGill University

First Author

Xianglin Zhao

Presenting Author

Xianglin Zhao

Minimax Bayesian Predictive Inference with the Horseshoe Prior

This work focuses on distributional prediction of a high-dimensional Gaussian vector with a sparse mean, with accuracy measured by the Kullback-Leibler loss. Several priors have been considered in the literature, including discrete priors and Laplace priors deployed inside the spike-and-slab framework. This work complements the toolbox by considering the Horseshoe prior. We start with the oracle case where the sparsity level is known, and demonstrate that the Horseshoe prior provides a predictive risk that attains minimaxity with a properly calibrated parameter. Without knowledge of the sparsity level, we consider a full Bayes method that imposes a hierarchical prior based on the Horseshoe, which attains the minimax rate adaptively. These hierarchical priors are continuous and fully automatic (i.e., there is no need to specify hyperparameters), and are therefore easy to implement. Since the Horseshoe is a continuous mixture of Gaussian priors, the predictive density can be written as a continuous mixture of normal densities, making the predictive inference computationally inexpensive, a property desired by practitioners. 
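
For reference, the Horseshoe prior on each mean component is the standard scale mixture of normals,

\[
\theta_i \mid \lambda_i, \tau \;\sim\; \mathcal{N}(0, \lambda_i^{2}\tau^{2}), \qquad \lambda_i \sim \mathrm{C}^{+}(0, 1),
\]

and, since the model is conditionally Gaussian-Gaussian given the local and global scales, the predictive density for a future observation takes the form

\[
p(\tilde{y} \mid y) \;=\; \int \mathcal{N}\!\big(\tilde{y};\ \mathbb{E}[\theta \mid y, \lambda, \tau],\ \tilde{\sigma}^{2} + \mathrm{Var}[\theta \mid y, \lambda, \tau]\big)\, \pi(\lambda, \tau \mid y)\, d\lambda\, d\tau,
\]

a continuous mixture of normal densities, which is the computational convenience highlighted above (the observation variance $\tilde{\sigma}^{2}$ and the notation are mine, not the paper's).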

Keywords

Horseshoe Prior

Predictive Inference

Sparse Normal Means

Kullback-Leibler Loss

Asymptotic Minimaxity 

Co-Author

Veronika Rockova, The University of Chicago

First Author

Percy Zhai, The University of Chicago

Presenting Author

Percy Zhai, The University of Chicago