Learning from peers: Evolutionary Stochastic Gradient Langevin Dynamic

Yushu Huang (First Author, Presenting Author)
Faming Liang (Co-Author), Purdue University

Wednesday, Aug 7: 9:05 AM - 9:20 AM
2468
Contributed Papers
Oregon Convention Center
Although stochastic gradient Markov chain Monte Carlo (SGMCMC) algorithms are often used to solve non-convex learning problems, few attempts have been made to develop a population SGMCMC algorithm. Such a population algorithm, which runs a group of Markov chains, can improve mixing through interactions between the chains. In this paper, we propose the Evolutionary Stochastic Gradient Langevin Dynamic (ESGLD) algorithm: a population SGMCMC algorithm that takes advantage of the evolutionary operators that have proven powerful for overcoming local traps in Monte Carlo simulations with the Metropolis-Hastings algorithm. We prove the convergence of the ESGLD algorithm and demonstrate, through synthetic and real-data experiments, that it outperforms other SGMCMC algorithms in terms of convergence speed and effective sample generation.
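The abstract does not give the algorithm's update rules, so what follows is only a minimal sketch of the general idea behind a population Langevin sampler with an interaction move, under stated assumptions: the double-well target U, the temperature ladder, and the names sgld_step and population_sgld are hypothetical illustrations, not the ESGLD operators from the paper (evolutionary Monte Carlo typically also employs mutation and crossover moves, and SGMCMC replaces the full gradient with a minibatch estimate).

```python
import numpy as np

# Illustrative multimodal (non-convex) target: sample theta with density
# proportional to exp(-U(theta)). This double-well potential is a stand-in,
# not one of the paper's experiments.
def U(theta):
    return 0.25 * np.sum((theta**2 - 1.0)**2)

def grad_U(theta):
    # Full gradient; in SGMCMC this would be an unbiased minibatch estimate.
    return theta * (theta**2 - 1.0)

def sgld_step(theta, step_size, temperature, rng):
    # One Langevin update targeting exp(-U(theta)/temperature).
    noise = rng.normal(size=theta.shape)
    return (theta - 0.5 * step_size * grad_U(theta)
            + np.sqrt(step_size * temperature) * noise)

def population_sgld(n_chains=5, n_iters=5000, step_size=1e-2, seed=0):
    rng = np.random.default_rng(seed)
    # Temperature ladder: hotter chains explore; the coldest targets exp(-U).
    temps = np.geomspace(1.0, 10.0, n_chains)
    thetas = rng.normal(size=(n_chains, 2))
    samples = []
    for _ in range(n_iters):
        for k in range(n_chains):
            thetas[k] = sgld_step(thetas[k], step_size, temps[k], rng)
        # Interaction between chains: propose swapping the states of two
        # adjacent chains and accept with the usual Metropolis ratio.
        k = rng.integers(n_chains - 1)
        log_ratio = ((U(thetas[k]) - U(thetas[k + 1]))
                     * (1.0 / temps[k] - 1.0 / temps[k + 1]))
        if np.log(rng.uniform()) < log_ratio:
            thetas[[k, k + 1]] = thetas[[k + 1, k]]
        samples.append(thetas[0].copy())  # record the coldest chain
    return np.array(samples)
```

In this toy setup, the exchange move gives the cold chain access to modes discovered by the hotter chains, which is one way to read the "learning from peers" in the title; the paper's actual operators and schedules may differ.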

Keywords

evolutionary Monte Carlo

stochastic gradient Langevin dynamics

non-convex learning

population Markov chain Monte Carlo

local trap

Main Sponsor

Section on Statistical Computing