Distributionally Robust Posterior Sampling: A Variational Bayes Approach

Bennett Zhu, Co-Author (Presenting Author)

David Blei, Co-Author
Columbia University, Data Science Institute

Bohan Wu, First Author
Columbia University
Thursday, Aug 7: 10:50 AM - 11:05 AM
Session 2774, Contributed Papers
Music City Center 
We study robust posterior inference when the observed data are subject to adversarial contamination, such as outliers and distributional shifts. We introduce Distributionally Robust Variational Bayes (DRVB), a robust posterior sampling method based on solving a minimax variational Bayes problem over Wasserstein ambiguity sets. Computationally, our approach leverages gradient flows on probability spaces, where the choice of geometry is crucial for addressing different forms of adversarial contamination. We design and analyze DRVB algorithms based on Wasserstein, Fisher-Rao, and hybrid Wasserstein-Fisher-Rao flows, highlighting their respective strengths in handling outliers, distribution shift, and mixed global-local contamination. Our theoretical results establish robustness guarantees and polynomial-time convergence of each discretized gradient flow to its stationary measure. Empirical results show that DRVB outperforms naive Langevin Monte Carlo (LMC) in generating robust posterior samples across a range of adversarial contamination settings.
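For context on the LMC baseline mentioned above: unadjusted Langevin Monte Carlo can be viewed as an Euler-Maruyama discretization of the Wasserstein gradient flow of the KL divergence to the target posterior. The sketch below illustrates that baseline only, not the DRVB method itself; the standard-normal target, step size, and chain count are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Unadjusted Langevin Monte Carlo (LMC): the naive baseline the abstract
# compares against. Each step is an Euler-Maruyama discretization of the
# Langevin SDE targeting pi(x) ∝ exp(-U(x)).
# Illustrative target (not from the paper): standard normal, U(x) = x^2 / 2.

def grad_U(x):
    """Gradient of the potential U(x) = x^2 / 2 for a standard-normal target."""
    return x

def lmc_samples(n_steps=5000, step=0.05, n_chains=1000, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n_chains)  # run many chains in parallel
    for _ in range(n_steps):
        # x <- x - step * grad U(x) + sqrt(2 * step) * Gaussian noise
        x = x - step * grad_U(x) + np.sqrt(2 * step) * rng.standard_normal(n_chains)
    return x

samples = lmc_samples()
print(float(samples.mean()), float(samples.std()))
```

For a Gaussian target the stationary variance of this discretization is 1/(1 - step/2), so with step = 0.05 the sample standard deviation lands slightly above 1, illustrating the discretization bias that finer step sizes reduce.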

Keywords

Variational Bayes

Distributionally Robust Inference

Wasserstein-Fisher-Rao Gradient Flow

Mixed Global-Local Contamination

Adversarial Contamination

Main Sponsor

Section on Statistical Learning and Data Science