Advances in Bayesian Methods for Complex Structured Data and Dependency Modeling

Christopher Franck, Chair
Virginia Tech

Jyotishka Datta, Organizer
Virginia Tech
 
Sunday, Aug 3: 2:00 PM - 3:50 PM
0813 
Topic-Contributed Paper Session 
Music City Center 
Room: CC-106A 

Applied

No

Main Sponsor

Section on Bayesian Statistical Science

Co-Sponsors

International Indian Statistical Association
International Society for Bayesian Analysis (ISBA)

Presentations

High-dimensional Bernstein–von Mises theorems for covariance and precision matrices

This paper examines the posterior distribution of covariance/precision matrices in a "large $p$, large $n$" scenario, where $p$ is the number of variables and $n$ is the sample size. Our analysis establishes asymptotic normality of the posterior distribution of the entire covariance/precision matrix under specific growth restrictions on $p_n$ and other mild assumptions. In particular, the limiting distribution is a symmetric matrix variate normal distribution whose parameters depend on the maximum likelihood estimate. Our results hold for a wide class of prior distributions that includes standard choices used by practitioners. Next, we consider Gaussian graphical models that induce sparsity in the precision matrix. Posterior contraction rates and asymptotic normality of the corresponding posterior distribution are established under mild assumptions on the prior and the true data-generating mechanism.
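
As a rough numerical companion to this result (not part of the paper), the sketch below checks the Bernstein–von Mises behavior in a small, fixed-$p$ Gaussian setting with a conjugate inverse-Wishart prior: posterior draws of a covariance entry concentrate around the maximum likelihood estimate as $n$ grows. The prior hyperparameters and dimensions are illustrative choices only.

```python
# Minimal sketch: posterior of a covariance matrix under an illustrative inverse-Wishart
# prior, compared with the MLE it should concentrate around (mean known to be zero).
import numpy as np
from scipy.stats import invwishart

rng = np.random.default_rng(0)
p, n = 5, 2000
Sigma_true = np.eye(p) + 0.3                    # true covariance: diagonal 1.3, off-diagonal 0.3
X = rng.multivariate_normal(np.zeros(p), Sigma_true, size=n)

S = X.T @ X                                     # scatter matrix
Sigma_mle = S / n                               # maximum likelihood estimate

nu0, Psi0 = p + 2, np.eye(p)                    # illustrative prior hyperparameters
posterior = invwishart(df=nu0 + n, scale=Psi0 + S)   # conjugate posterior
draws = posterior.rvs(size=5000)                # posterior draws of the covariance matrix

entry = draws[:, 0, 1]                          # posterior samples of Sigma_{12}
print("posterior mean vs MLE:", entry.mean(), Sigma_mle[0, 1])
print("posterior sd (scaled by sqrt(n)):", np.sqrt(n) * entry.std())
```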

Keywords

High-dimensional covariance estimation

Bernstein–von Mises theorem 

Co-Author

Partha Sarkar, Florida State University

Speaker

Partha Sarkar, Florida State University

Responsibly Emboldening Spatially Dependent Predictions

Boldness-recalibration enables better decisions via responsible emboldening of probability predictions under the assumption that the predictions are independent. However, many scenarios involve probability predictions with spatial dependence, such as election modeling, weather forecasting, and species occurrence modeling, among others. In this presentation, we extend boldness-recalibration to the spatial calibration of probability predictions of this nature. We demonstrate how to leverage spatial relationships to responsibly embolden probability predictions while maintaining a user-specified probability of calibration. We compare the efficacy of boldness-recalibration with and without explicitly accounting for spatial dependence.
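
For readers unfamiliar with the underlying machinery, the snippet below sketches only the linear log-odds (LLO) adjustment that boldness-recalibration builds on; the spatial extension and the calibration constraint described in the talk are not implemented, and the probabilities and parameter values are hypothetical.

```python
# Hypothetical sketch of a linear log-odds (LLO) adjustment: logit(p') = log(delta) + gamma * logit(p).
# gamma > 1 pushes predictions away from 0.5 (bolder); delta shifts the base rate.
import numpy as np

def llo(p, delta, gamma):
    """Recalibrate probabilities p with shift delta and scale gamma on the log-odds scale."""
    odds = delta * (p / (1 - p)) ** gamma
    return odds / (1 + odds)

probs = np.array([0.55, 0.62, 0.48, 0.71])      # hypothetical probability predictions
print(llo(probs, delta=1.0, gamma=2.0))         # emboldened versions of the same predictions
```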

Co-Author

Christopher Franck, Virginia Tech

Speaker

Adeline Guthrie, Virginia Tech

Exact Optimality of the Horseshoe+ Prior

This talk considers the exact optimality of the horseshoe+ prior in estimation and multiple testing of multivariate normal means under sparsity. First, the posterior means under the horseshoe+ prior are shown to be minimax as point estimates of the multivariate normal means under sparsity. Then, under the framework of Bogdan et al. (2011), a thresholding multiple testing procedure based on the horseshoe+ prior is shown to attain the asymptotic Bayes optimality under sparsity (ABOS) property. Because the choice of the global parameter depends on the unknown sparsity level, we further propose an empirical Bayes approach for the multiple testing problem and demonstrate its optimality. The paper also discusses connections to the work of van der Pas et al. (2016) and Bhadra et al. (2017).
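
A rough Monte Carlo sketch of the flavor of this procedure is given below; it assumes a normal means model with unit error variance, a fixed (rather than empirical Bayes) global scale, and the common half-thresholding rule, so it should be read as an illustration rather than as the talk's procedure.

```python
# Sketch: posterior mean under the horseshoe+ hierarchy for a single observation y ~ N(theta, 1),
# computed by self-normalized importance sampling with prior draws as the proposal.
import numpy as np

rng = np.random.default_rng(1)
tau, n_draws = 0.1, 200_000                              # fixed global scale (illustrative)

eta = np.abs(rng.standard_cauchy(n_draws))               # eta_i ~ C+(0, 1)
lam = tau * eta * np.abs(rng.standard_cauchy(n_draws))   # lambda_i | eta_i ~ C+(0, tau * eta_i)
theta = rng.normal(0.0, lam)                             # theta_i | lambda_i ~ N(0, lambda_i^2)

def posterior_mean(y):
    """E[theta | y] via importance weights proportional to the N(y; theta, 1) likelihood."""
    w = np.exp(-0.5 * (y - theta) ** 2)
    return np.sum(w * theta) / np.sum(w)

for y in (0.5, 2.0, 4.0):
    m = posterior_mean(y)
    print(f"y = {y}: posterior mean {m:.3f}, flag as signal: {abs(m) > abs(y) / 2}")
```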

Co-Author(s)

Zikun Qin, NYU Langone Health
Malay Ghosh, University of Florida
Sayantan Paul

Speaker

Zikun Qin, NYU Langone Health

Bayesian Approaches for Modeling Changes in the Covariance Structure and the Dependency Graph

Many dynamic and random processes in nature go through sudden changes. Changes in the mean structure and the related changepoint detection problems have been studied widely in the literature. However, changes can also occur in the covariance structure and the related dependency graph, with applications in economics, the biological sciences, and other fields. This work investigates modeling such changes and the related changepoint detection problems in a Bayesian setup and establishes relevant theoretical results.
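
As a point of contrast with the Bayesian treatment in the talk, the toy example below locates a single change in the covariance of Gaussian data by maximizing the split Gaussian log-likelihood over candidate changepoints; the dimensions, covariances, and changepoint location are made up for illustration.

```python
# Toy covariance changepoint search: fit a zero-mean Gaussian with its own sample covariance
# on each side of every candidate split and keep the split with the largest total log-likelihood.
import numpy as np

rng = np.random.default_rng(2)
p, n, true_cp = 3, 400, 250
Sigma1 = np.eye(p)                                           # independent before the change
Sigma2 = np.eye(p) + 0.7 * (np.ones((p, p)) - np.eye(p))     # correlated after the change
X = np.vstack([rng.multivariate_normal(np.zeros(p), Sigma1, true_cp),
               rng.multivariate_normal(np.zeros(p), Sigma2, n - true_cp)])

def segment_loglik(Y):
    """Maximized Gaussian log-likelihood of a zero-mean segment (additive constants dropped)."""
    S = Y.T @ Y / len(Y)
    _, logdet = np.linalg.slogdet(S)
    return -0.5 * len(Y) * (logdet + p)

candidates = range(30, n - 30)                               # keep both segments long enough
scores = [segment_loglik(X[:t]) + segment_loglik(X[t:]) for t in candidates]
print("estimated changepoint:", list(candidates)[int(np.argmax(scores))])
```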

Speaker

Nilabja Guha, University of Manchester

Horseshoe-type Priors for Independent Component Estimation

We provide a novel latent variable representation of independent component analysis that enables both point estimation via expectation-maximization (EM) and full posterior sampling via Markov chain Monte Carlo (MCMC) algorithms for fast implementation. Our method also applies to flow-based methods for nonlinear feature extraction. We discuss how to implement conditional posteriors and envelope-based methods for optimization. Through this representation hierarchy, we unify a number of hitherto disjoint estimation procedures. We illustrate our methodology and algorithms on a numerical example. Finally, we conclude with directions for future research.
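
As a baseline illustration only (it uses off-the-shelf FastICA rather than the horseshoe-type latent variable representation, EM, or MCMC samplers discussed in the talk), the snippet below recovers two non-Gaussian sources from a toy linear mixture.

```python
# Baseline ICA on a toy mixture with scikit-learn's FastICA; mixing matrix and sources are made up.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(3)
n = 2000
t = np.linspace(0, 8, n)
S = np.c_[np.sign(np.sin(3 * t)), rng.laplace(size=n)]   # two independent non-Gaussian sources
A = np.array([[1.0, 0.5], [0.4, 1.0]])                   # mixing matrix
X = S @ A.T                                              # observed mixtures

ica = FastICA(n_components=2, random_state=0)
S_hat = ica.fit_transform(X)                             # point estimates of the unmixed sources
print("correlation of recovered vs true sources:\n", np.corrcoef(S_hat.T, S.T)[:2, 2:])
```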

Keywords

Bayesian

Structure learning

High-dimensional

Speaker

Jyotishka Datta, Virginia Tech