A Bayesian decision-theoretic approach to sparse estimation
Wednesday, Aug 6: 10:35 AM - 10:50 AM
1296
Contributed Papers
Music City Center
We extend the work of Hahn & Carvalho (2015) and develop a doubly-regularized sparse regression estimator by synthesizing Bayesian regularization with penalized least squares within a decision-theoretic framework. In contrast to existing Bayesian decision-theoretic formulations, which rely chiefly on the symmetric 0-1 loss, the new method, which we call Bayesian Decoupling, employs a family of penalized loss functions indexed by a sparsity-tuning parameter. We propose a class of reweighted l1 penalties, with two specific instances that achieve simultaneous bias reduction and convexity. The penalties are designed to account for estimated signal sizes, as enabled by the Bayesian paradigm. The tuning parameter is selected using a posterior benchmarking criterion, which quantifies the drop in predictive power relative to the optimal Bayes estimator under squared error loss. Additionally, in contrast to the widely used median probability model, which selects variables by thresholding posterior inclusion probabilities at the fixed value of 1/2, Bayesian Decoupling enables a data-driven threshold that automatically adapts to estimated signal sizes.
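A minimal sketch of the decoupling recipe the abstract describes: summarize the posterior, then solve a reweighted-l1 penalized least squares problem on the posterior fitted values, choosing the tuning parameter by benchmarking predictive loss against the Bayes estimator. The ridge-style posterior approximation, the 1/|posterior mean| weights, the Lasso solver, and the 5% tolerance below are all illustrative assumptions, not the authors' actual prior, penalty, or benchmarking rule.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p = 100, 20
X = rng.normal(size=(n, p))
beta_true = np.concatenate([np.array([3.0, -2.0, 1.5]), np.zeros(p - 3)])
y = X @ beta_true + rng.normal(size=n)

# Stand-in posterior: a ridge-like Gaussian approximation. In practice the
# posterior mean would come from an MCMC fit under a sparsity-inducing prior.
beta_mean = np.linalg.solve(X.T @ X + np.eye(p), X.T @ y)
y_bayes = X @ beta_mean                     # fit of the Bayes estimator under squared error
weights = 1.0 / (np.abs(beta_mean) + 1e-8)  # reweighted l1: weaker signals are penalized more

def decouple(lam):
    """Adaptive-lasso trick: rescale columns so a plain l1 solver applies the weights."""
    Xw = X / weights
    fit = Lasso(alpha=lam, fit_intercept=False, max_iter=50_000).fit(Xw, y_bayes)
    return fit.coef_ / weights

# Benchmarking: keep the sparsest lambda whose loss of predictive fit, relative to
# the Bayes estimator, stays within an (illustrative) 5% tolerance.
for lam in np.logspace(-3, 0, 30)[::-1]:    # scan lambda from largest to smallest
    beta_lam = decouple(lam)
    rel_loss = np.mean((y_bayes - X @ beta_lam) ** 2) / np.mean((y - y_bayes) ** 2)
    if rel_loss <= 0.05:
        break

print("selected variables:", np.flatnonzero(beta_lam))
```

The point of the sketch is the decoupling itself: shrinkage happens once in the posterior, and sparsification happens afterwards as a separate optimization, so the selected support can adapt to the estimated signal sizes rather than to a fixed probability threshold.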
Decision theory
Loss function
Model selection
Penalized least squares
Sparse estimation
Tuning parameter selection
Main Sponsor
Section on Bayesian Statistical Science