Computationally Efficient Laplace Approximations for Neural Networks
Tuesday, Aug 5: 2:05 PM - 2:20 PM
2209
Contributed Papers
Music City Center
The Laplace approximation is arguably the simplest approach to uncertainty quantification for the intractable posteriors associated with deep neural networks. While Laplace-approximation-based methods are widely studied, they are often computationally infeasible due to the cost of inverting a (large) Hessian matrix. This has motivated an emerging line of work that develops lower-dimensional or sparse approximations of the Hessian. We build on this work by proposing two novel sparse approximations of the Hessian: (1) greedy subset selection, and (2) gradient-based thresholding. We show via simulations that these methods perform well against current benchmarks over a broad range of experimental settings.
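To illustrate the general idea (a minimal sketch, not the authors' methods), the Python snippet below forms the Laplace posterior covariance as the inverse of a sparsified Hessian. The simple magnitude cutoff used here is a hypothetical stand-in for the proposed greedy subset selection and gradient-based thresholding criteria, which the abstract does not specify.

import numpy as np

def laplace_covariance_sparse(hessian, cutoff):
    # Zero out small off-diagonal entries; keep the full diagonal so the
    # sparsified matrix stays well conditioned. Assumes a positive-definite
    # Hessian (e.g. a Gauss-Newton approximation plus a Gaussian prior term).
    sparse_h = np.where(np.abs(hessian) >= cutoff, hessian, 0.0)
    np.fill_diagonal(sparse_h, np.diag(hessian))
    # Laplace approximation: posterior ~ N(MAP estimate, inverse Hessian).
    return np.linalg.inv(sparse_h)

# Toy example: a small symmetric positive-definite "Hessian".
rng = np.random.default_rng(0)
a = rng.normal(size=(5, 5))
hessian = a @ a.T + 5.0 * np.eye(5)
cov = laplace_covariance_sparse(hessian, cutoff=0.5)

# Draw posterior weight samples for predictive uncertainty
# (placeholder MAP estimate at zero, for illustration only).
map_estimate = np.zeros(5)
samples = rng.multivariate_normal(map_estimate, cov, size=100)

In practice the Hessian of a deep network is too large to store or invert densely; the point of sparse approximations like those proposed here is that inversion (or linear solves) against the sparsified matrix scales with the number of retained entries rather than the full parameter count.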
Laplace approximation
Uncertainty quantification
Posterior predictive distribution
Hessian matrix
Subset selection
Main Sponsor
Section on Bayesian Statistical Science