Minimax Bayesian Predictive Inference with the Horseshoe Prior
Percy Zhai
Presenting Author
The University of Chicago
Tuesday, Aug 5: 10:05 AM - 10:20 AM
0997
Contributed Papers
Music City Center
This work focuses on distributional prediction of a high-dimensional Gaussian vector with a sparse mean, with accuracy measured by the Kullback-Leibler loss. Several priors have been considered in the literature, including discrete priors and Laplace priors deployed within the spike-and-slab framework. This work complements that toolbox by considering the Horseshoe prior. We start with the oracle case where the sparsity level is known, and demonstrate that the Horseshoe prior with a properly calibrated parameter attains the minimax predictive risk. Without knowledge of the sparsity level, we consider a full Bayes method that imposes a hierarchical prior based on the Horseshoe and attains the minimax rate adaptively. These hierarchical priors are continuous and fully automatic (i.e., they require no hyper-parameter specification), and are therefore easy to implement. Since the Horseshoe is a continuous scale mixture of Gaussian priors, the predictive density can be written as a continuous mixture of normal densities, making predictive inference computationally inexpensive, a property valued by practitioners.
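To illustrate the mixture-of-normals property of the predictive density, here is a minimal sketch, assuming unit noise variances for both the observed and future data, a standard half-Cauchy local scale, and a fixed global scale tau (all illustrative assumptions, not the talk's exact setup). Conditional on the local scale, the Bayes predictive for one coordinate is Gaussian, so the predictive density can be approximated by an importance-weighted average of normal densities.

```python
import numpy as np
from scipy import stats

def horseshoe_predictive_density(y, x, tau=1.0, n_mc=10_000, seed=None):
    """Monte Carlo sketch of the Horseshoe Bayes predictive density for one
    coordinate of a sparse normal-means model (illustrative assumptions):

        x | theta ~ N(theta, 1),   y | theta ~ N(theta, 1),
        theta | lam ~ N(0, tau^2 lam^2),   lam ~ C+(0, 1).

    Conditional on lam, the predictive for y is normal, so the predictive
    density is a continuous mixture of normal densities; we average over
    draws of lam, weighted by the marginal likelihood of x given lam.
    """
    rng = np.random.default_rng(seed)
    lam = np.abs(rng.standard_cauchy(n_mc))   # half-Cauchy local scales
    v = (tau * lam) ** 2                      # prior variance of theta given lam
    kappa = v / (1.0 + v)                     # shrinkage weight given lam
    # importance weight: x | lam ~ N(0, 1 + v)
    w = stats.norm.pdf(x, loc=0.0, scale=np.sqrt(1.0 + v))
    # conditional predictive: y | x, lam ~ N(kappa * x, kappa + 1)
    dens = stats.norm.pdf(y, loc=kappa * x, scale=np.sqrt(kappa + 1.0))
    return np.sum(w * dens) / np.sum(w)

# Example: predictive density at y = 0.5 after observing x = 3.0
print(horseshoe_predictive_density(0.5, x=3.0, tau=0.1))
```

Because each mixture component is a closed-form normal density, the only stochastic step is sampling the scalar local scales, which is what makes this form of predictive inference computationally cheap.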
Horseshoe Prior
Predictive Inference
Sparse Normal Means
Kullback-Leibler Loss
Asymptotic Minimaxity
Main Sponsor
International Society for Bayesian Analysis (ISBA)