Implicit Generative Prior for Bayesian Neural Networks
Monday, Aug 5: 9:05 AM - 9:20 AM
Invited Paper Session
Oregon Convention Center
Bayesian neural networks are a powerful tool for characterizing predictive uncertainty, but they face two challenges in practice. First, it is difficult to define meaningful priors for the network weights. Second, conventional computational strategies become impractical for large and complex applications. In this paper, we adopt a class of implicit generative priors and propose a novel neural adaptive empirical Bayes framework for Bayesian modeling and inference. These priors are obtained by passing a known low-dimensional distribution through a nonlinear transformation, allowing us to handle complex data distributions and capture the underlying manifold structure effectively. Our framework combines variational inference with a gradient ascent algorithm, which selects the hyperparameters and approximates the posterior distribution. Theoretical justification is established through both posterior and classification consistency. We demonstrate the practical utility of our framework through extensive examples, including the two-spiral problem, regression tasks, 10 UCI datasets, and MNIST image classification. Our experimental results highlight the superiority of our proposed framework.
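To make the prior construction concrete, the following is a minimal sketch (not the authors' code) of an implicit generative prior in PyTorch: low-dimensional Gaussian noise is pushed through a small generator network whose output is read as the weight vector of a Bayesian neural network, so each latent draw yields one function sampled from the prior. All names and architecture choices here (Generator, bnn_forward, the layer sizes) are illustrative assumptions; the empirical Bayes step that tunes the generator's parameters by gradient ascent alongside variational inference is omitted.

```python
import torch
import torch.nn as nn

LATENT_DIM = 8               # dimension of the known low-dimensional base distribution
IN, HIDDEN, OUT = 1, 16, 1   # architecture of the BNN whose weights receive the prior

# Total number of BNN parameters: two weight matrices plus two bias vectors.
N_WEIGHTS = IN * HIDDEN + HIDDEN + HIDDEN * OUT + OUT

class Generator(nn.Module):
    """Nonlinear transform G_theta: z ~ N(0, I_d) -> BNN weight vector (hypothetical)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM, 64), nn.Tanh(),
            nn.Linear(64, N_WEIGHTS),
        )

    def forward(self, z):
        return self.net(z)

def bnn_forward(x, w):
    """Functional forward pass of the BNN using an explicit weight vector w."""
    i = 0
    W1 = w[i:i + IN * HIDDEN].view(HIDDEN, IN); i += IN * HIDDEN
    b1 = w[i:i + HIDDEN]; i += HIDDEN
    W2 = w[i:i + HIDDEN * OUT].view(OUT, HIDDEN); i += HIDDEN * OUT
    b2 = w[i:i + OUT]
    h = torch.tanh(x @ W1.t() + b1)
    return h @ W2.t() + b2

gen = Generator()
x = torch.linspace(-2, 2, 50).unsqueeze(1)

# Draw prior predictive samples: each latent z induces one network, i.e. one
# function drawn from the implicit prior p(w | theta).
for _ in range(5):
    z = torch.randn(LATENT_DIM)
    w = gen(z)
    y = bnn_forward(x, w)
```

In the full framework, the generator's parameters play the role of prior hyperparameters and would be selected by gradient ascent on an empirical Bayes objective, while a variational family approximates the posterior over the weights.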