The Boons of Being Less Bayesian: a study of partially stochastic neural networks

Speaker: Eric Nalisnick
Johns Hopkins University
Thursday, Aug 8: 9:35 AM - 9:50 AM
Invited Paper Session 
Oregon Convention Center 
Bayesian approaches have the potential to mitigate problems with neural networks (NNs) such as overconfidence and lack of robustness. However, computation is a major obstacle to performing high-fidelity posterior inference. In this talk, I will first present our research on scalable variational approximations based on subnetworks: only a subset of the NN is given a Bayesian treatment, and we find this is enough for competitive uncertainty estimation. I will then further justify subnetwork inference, not simply for its computational benefits, but from the theoretical insight that these partially stochastic NNs have as rich a posterior predictive distribution as fully stochastic NNs. Moreover, across various inference schemes, we observe no empirical benefit to using fully stochastic NNs. I will close by questioning whether a fully Bayesian treatment of NNs can ever be beneficial.
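To illustrate the idea of a partially stochastic NN described in the abstract, the sketch below builds a toy network in which only the last layer is Bayesian (a mean-field Gaussian variational posterior) while the rest stays deterministic. This is a minimal illustration of the general concept, not the talk's actual method; all architecture choices, shapes, and names (`W1`, `mu`, `sigma`, `predict`) are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Deterministic first layer: ordinary point-estimate weights.
W1 = rng.normal(size=(2, 8))
b1 = np.zeros(8)

# Bayesian last layer only: variational posterior q(W2) = N(mu, sigma^2),
# with sigma = softplus(rho) to keep it positive.
mu = rng.normal(scale=0.1, size=(8, 1))
rho = np.full((8, 1), -3.0)
sigma = np.log1p(np.exp(rho))

def predict(x, n_samples=100):
    """Monte Carlo posterior predictive: sample only the stochastic subnetwork."""
    h = np.tanh(x @ W1 + b1)                      # deterministic part, computed once
    eps = rng.normal(size=(n_samples, *mu.shape)) # reparameterised samples of W2
    W2_samples = mu + sigma * eps
    preds = np.stack([h @ W2 for W2 in W2_samples])
    return preds.mean(axis=0), preds.std(axis=0)  # predictive mean and uncertainty

x = np.array([[0.5, -1.0]])
mean, std = predict(x)
```

The computational appeal is visible even in this toy: the deterministic layers are evaluated once per input, and only the small stochastic subnetwork is sampled repeatedly to form the posterior predictive.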