Gradient flows for empirical Bayes in high-dimensional linear models
Wednesday, Aug 6: 11:00 AM - 11:25 AM
Invited Paper Session
Music City Center
This talk will describe an adaptive Langevin diffusion sampling procedure for empirical Bayes learning of the prior distribution in a random effects linear model. The procedure may be implemented either parametrically or nonparametrically, and both forms may be motivated by a Wasserstein gradient flow approach to maximizing the marginal log-likelihood. I will discuss some basic consistency results for the (possibly nonparametric) maximum likelihood estimator in this problem, and then describe an exact asymptotic analysis of the learning dynamics in a setting of i.i.d. random design, using dynamical mean-field theory.
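As a rough illustration of the parametric version of this idea, the sketch below interleaves unadjusted Langevin steps on the posterior of the random effects with a stochastic-approximation update of the prior variance toward the marginal-likelihood maximizer. All specifics here (the Gaussian prior family N(0, tau2), the problem dimensions, the step sizes, and the Robbins-Monro schedule) are my own illustrative assumptions, not details from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic random effects linear model: y = X @ theta + noise,
# with theta_i drawn i.i.d. from an unknown prior (here N(0, tau2_true)).
n, d = 500, 200
sigma = 1.0
tau2_true = 4.0
X = rng.normal(size=(n, d)) / np.sqrt(n)
theta_true = rng.normal(scale=np.sqrt(tau2_true), size=d)
y = X @ theta_true + sigma * rng.normal(size=n)

# Adaptive Langevin: run unadjusted Langevin dynamics on the posterior
# p(theta | y; tau2) under the current Gaussian prior N(0, tau2 * I),
# while driving tau2 toward the marginal-likelihood maximizer by
# stochastic approximation (tau2 tracks E_posterior[theta_i^2]).
eta = 0.01        # Langevin step size (illustrative choice)
burn_in = 500     # let theta approach the posterior before adapting
theta = np.zeros(d)
tau2 = 1.0

for k in range(10000):
    # Unadjusted Langevin step on the log-posterior gradient
    grad = X.T @ (y - X @ theta) / sigma**2 - theta / tau2
    theta = theta + eta * grad + np.sqrt(2.0 * eta) * rng.normal(size=d)
    if k >= burn_in:
        # Robbins-Monro averaging of the posterior second moment
        delta = 1.0 / (k - burn_in + 10)
        tau2 = (1.0 - delta) * tau2 + delta * np.mean(theta**2)

print(f"estimated prior variance: {tau2:.2f} (truth: {tau2_true:.2f})")
```

The adaptive update is justified because the gradient of the marginal log-likelihood in the hyperparameter equals a posterior expectation of the prior score, so tracking the posterior second moment of theta drives tau2 toward the empirical Bayes maximum likelihood estimate.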
Joint work with Leying Guan, Yandi Shen, Yihong Wu, Justin Ko, Bruno Loureiro, and Yue M. Lu.
Empirical Bayes
Nonparametric maximum likelihood
Wasserstein gradient flow
Log-Sobolev inequality
High-dimensional regression