Computationally Efficient Laplace Approximations for Neural Networks

Kshitij Khare, Co-Author
University of Florida

Rohit K. Patra, Co-Author
LinkedIn Inc.

Swarnali Raha, First Author and Presenting Author
University of Florida
Tuesday, Aug 5: 2:05 PM - 2:20 PM
Room 2209, Music City Center
Contributed Papers
The Laplace approximation is arguably the simplest approach to uncertainty quantification with the intractable posteriors associated with deep neural networks. While Laplace approximation based methods are widely studied, they are often computationally infeasible due to the cost of inverting a (large) Hessian matrix. This has led to an emerging line of work that develops lower-dimensional or sparse approximations of the Hessian. We build upon this work by proposing two novel sparse approximations of the Hessian: (1) greedy subset selection, and (2) gradient-based thresholding. We show via simulations that these methods compare favorably with current benchmarks across a broad range of experimental settings.
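
For context on the recipe the abstract refers to: the Laplace approximation replaces the posterior p(theta | D) with a Gaussian N(theta_MAP, H^{-1}), where H is the Hessian of the negative log-posterior at the MAP estimate, and it is the inversion of H that becomes prohibitive when the parameter count is large. The sketch below walks through this pipeline on a toy logistic-regression model, using a generic magnitude-based thresholding of the Hessian as a crude stand-in for sparsification. The model, the thresholding rule, and the 90th-percentile cutoff are all illustrative assumptions, not the greedy subset selection or gradient-based thresholding developed in the talk.

```python
import numpy as np

# Minimal sketch of the Laplace recipe on a toy logistic-regression model.
# Hypothetical illustration only -- not the authors' code or their proposed
# greedy / gradient-based selection rules.
rng = np.random.default_rng(0)
n, d = 200, 10
X = rng.normal(size=(n, d))
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-X @ rng.normal(size=d)))).astype(float)
prior_prec = 1.0  # Gaussian prior N(0, I / prior_prec)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30.0, 30.0)))

# MAP estimate via Newton's method on the negative log-posterior.
w = np.zeros(d)
for _ in range(25):
    p = sigmoid(X @ w)
    grad = X.T @ (p - y) + prior_prec * w
    H = (X * (p * (1.0 - p))[:, None]).T @ X + prior_prec * np.eye(d)
    w -= np.linalg.solve(H, grad)

# Full Laplace: posterior ~= N(w_MAP, H^{-1}). Inverting H is the O(d^3)
# bottleneck the abstract refers to when d is in the millions.
cov_full = np.linalg.inv(H)

# Generic magnitude thresholding of off-diagonal Hessian entries -- a crude
# stand-in for the sparsification schemes proposed in the talk. The diagonal
# is kept intact to help preserve positive definiteness (not guaranteed in
# general; a real implementation would check or project).
off_diag = H - np.diag(np.diag(H))
tau = np.quantile(np.abs(off_diag), 0.9)
H_sparse = np.where(np.abs(H) >= tau, H, 0.0)
np.fill_diagonal(H_sparse, np.diag(H))
cov_sparse = np.linalg.inv(H_sparse)  # cheap with sparse solvers at scale

print("Frobenius gap, full vs. sparse covariance:",
      np.linalg.norm(cov_full - cov_sparse))
```

In a real deep-learning setting the full H is never formed; the point of sparse or low-dimensional approximations is to decide which entries (or which subset of parameters) to retain before any inversion takes place.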

Keywords

Laplace approximation

Uncertainty quantification

Posterior predictive distribution

Hessian matrix

Subset selection

Main Sponsor

Section on Bayesian Statistical Science