Bayesian Neural Model Selection for Symmetry Learning
Thursday, Aug 8: 8:35 AM - 8:50 AM
Invited Paper Session
Oregon Convention Center
Recent advances in scalable Bayesian inference have made Bayesian model selection practical for deep neural networks. Optimizing the marginal likelihood embodies an Occam's razor effect, allowing neural network hyperparameters to be learned from training data rather than tuned by hand. This in turn enables automatic adaptation of neural architectures and differentiable learning of inductive biases from data. In this talk, we discuss how approximate inference techniques, such as the Laplace approximation and non-mean-field variational inference, can provide differentiable estimates of the marginal likelihood that scale to large models and datasets. We present promising examples in which scalable Bayesian model selection learns invariances and layer-wise equivariances, adapts neural architectures and inductive biases, and automatically discovers conserved quantities and associated symmetries in physical systems.
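To make the idea concrete, here is a minimal sketch (illustrative, not taken from the talk) of differentiable marginal-likelihood optimization via the Laplace approximation. The standard approximation is log p(D) ≈ log p(D | theta*) + log p(theta*) + (d/2) log 2pi - (1/2) log det H, where theta* is the MAP estimate and H is the Hessian of the negative log posterior. The sketch assumes a toy regression problem, a diagonal empirical-Fisher curvature estimate in place of the full Hessian, and a single scalar prior precision alpha as the hyperparameter being learned; all names and settings below are hypothetical.

```python
import torch

torch.manual_seed(0)

# Toy regression data (hypothetical, for illustration only).
X = torch.linspace(-2.0, 2.0, 64).unsqueeze(-1)
y = torch.sin(3.0 * X) + 0.1 * torch.randn_like(X)

model = torch.nn.Sequential(
    torch.nn.Linear(1, 16), torch.nn.Tanh(), torch.nn.Linear(16, 1)
)
sigma = 0.1                                       # assumed known observation noise
log_alpha = torch.zeros((), requires_grad=True)   # log prior precision (the hyperparameter)

def neg_log_joint(alpha):
    # Negative log likelihood (constant normalizer dropped) plus Gaussian prior term.
    nll = 0.5 * ((model(X) - y) ** 2).sum() / sigma ** 2
    prior = 0.5 * alpha * sum((p ** 2).sum() for p in model.parameters())
    return nll + prior

# 1) Inner loop: find the MAP estimate theta* for the current alpha.
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(2000):
    opt.zero_grad()
    loss = neg_log_joint(log_alpha.exp().detach())
    loss.backward()
    opt.step()

# 2) Diagonal curvature: empirical Fisher (squared per-example gradients)
#    as a cheap stand-in for the Hessian of the negative log likelihood.
diag_fisher = [torch.zeros_like(p) for p in model.parameters()]
for i in range(X.shape[0]):
    nll_i = 0.5 * ((model(X[i:i + 1]) - y[i:i + 1]) ** 2).sum() / sigma ** 2
    grads = torch.autograd.grad(nll_i, list(model.parameters()))
    for f, g in zip(diag_fisher, grads):
        f += g ** 2

# 3) Laplace estimate of the log marginal likelihood, differentiable in alpha:
#    log Z(alpha) ≈ -neg_log_joint(theta*) + (d/2) log alpha - (1/2) log det(H + alpha I).
#    theta* also depends on alpha, but that gradient term vanishes at the MAP
#    (envelope argument), so theta* is treated as fixed here.
alpha = log_alpha.exp()
d = sum(p.numel() for p in model.parameters())
log_det = sum(torch.log(f + alpha).sum() for f in diag_fisher)
log_marglik = -neg_log_joint(alpha) + 0.5 * d * torch.log(alpha) - 0.5 * log_det

# 4) Outer step: gradient ascent on log_alpha to maximize the evidence.
log_marglik.backward()
with torch.no_grad():
    log_alpha += 0.1 * log_alpha.grad
```

The applications mentioned in the abstract correspond, roughly, to replacing the scalar alpha with richer differentiable hyperparameters, for example parameters of a distribution over input transformations, so that maximizing the same evidence estimate selects invariances or equivariances supported by the data.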