Advancing Approximate Bayesian Inference and Model Selection for Neural Networks
Tuesday, Aug 6: 10:35 AM - 10:55 AM
Invited Paper Session
Oregon Convention Center
In an ideal world, state-of-the-art machine learning techniques, such as deep neural networks, would provide accurate measures of uncertainty in addition to assurance that the many modeling choices leading to a final trained model have not produced suboptimal or misleading results. Bayesian neural networks offer built-in uncertainty quantification, which is essential for high-stakes decision-making and for building trust in the algorithm, but model choice remains an unsolved problem. While in practice much attention goes to the choice of neural network architecture, the integrity of probabilistic predictions from Bayesian neural networks also rests on another key element: the selected prior distribution over the parameters. Recent work suggests that widely used default prior choices can lead to poor quantification of uncertainty, yet guidance for selecting priors is scarce, and the impact of different prior choices is severely understudied. We develop and implement Bayesian model selection methods for quantitatively assessing prior choices in Bayesian neural networks, with the ultimate goal of providing more reliable inference in these models.
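To make the core idea concrete, the sketch below illustrates (in Python) how Bayesian model selection can score candidate priors via the marginal likelihood ("evidence"). This is not the method presented in the session: for tractability it uses a Bayesian linear model, where the evidence is available in closed form, whereas for Bayesian neural networks the same quantity must be approximated (e.g., with Laplace or variational approximations). All variable names and values are hypothetical.

    # Minimal sketch: comparing prior scales by closed-form log evidence
    # in a linear-Gaussian model (illustrative stand-in for a BNN).
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic data: y = X @ w_true + noise
    n, d = 50, 3
    X = rng.normal(size=(n, d))
    w_true = np.array([1.5, -0.7, 0.3])
    sigma = 0.5                                # known observation noise std
    y = X @ w_true + sigma * rng.normal(size=n)

    def log_evidence(X, y, prior_std, sigma):
        """Log marginal likelihood of y under w ~ N(0, prior_std^2 I),
        y | w ~ N(Xw, sigma^2 I); obtained by integrating out w."""
        n = len(y)
        C = sigma**2 * np.eye(n) + prior_std**2 * (X @ X.T)  # marginal covariance of y
        sign, logdet = np.linalg.slogdet(C)
        quad = y @ np.linalg.solve(C, y)
        return -0.5 * (n * np.log(2 * np.pi) + logdet + quad)

    # The evidence quantifies the fit/complexity trade-off each prior induces,
    # which is the basis for selecting among candidate priors.
    for prior_std in [0.01, 0.1, 1.0, 10.0, 100.0]:
        print(f"prior_std={prior_std:>6}: log evidence = "
              f"{log_evidence(X, y, prior_std, sigma):8.2f}")

Running this prints a log-evidence score for each candidate prior scale; overly tight and overly diffuse priors both score poorly, so the comparison singles out priors compatible with the data, which is the behavior a quantitative prior-assessment method aims to exploit.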