Learning from peers: Evolutionary Stochastic Gradient Langevin Dynamic
Abstract Number:
2468
Submission Type:
Contributed Abstract
Contributed Abstract Type:
Paper
Participants:
Yushu Huang (1), Faming Liang (2)
Institutions:
(1) Purdue University, West Lafayette, IN, (2) Purdue University, West Lafayette, IN
Abstract Text:
Though stochastic gradient Markov chain Monte Carlo (SGMCMC) algorithms are often used to solve non-convex learning problems, few attempts have been made to develop a population SGMCMC algorithm. Such a population algorithm, which runs a group of Markov chains in parallel, can improve mixing through interactions between the chains. In this paper, we propose an Evolutionary Stochastic Gradient Langevin Dynamic (ESGLD) algorithm: a population SGMCMC algorithm that takes advantage of the evolutionary operators that have proven powerful for overcoming local traps in Monte Carlo simulations based on the Metropolis-Hastings algorithm. We prove the convergence of the ESGLD algorithm and demonstrate, through synthetic and real data experiments, that it outperforms other SGMCMC algorithms in terms of convergence speed and effective sample generation.
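For readers unfamiliar with population SGMCMC, the sketch below illustrates the two ingredients the abstract combines: a per-chain stochastic gradient Langevin update and a population-level interaction (here, a temperature-ladder exchange move, one of the standard evolutionary Monte Carlo operators). This is a minimal illustration, not the paper's ESGLD algorithm: the function names, the temperature ladder, and the use of a full-data log-posterior in the exchange acceptance ratio are all assumptions made for clarity.

import numpy as np

def sgld_step(theta, grad_log_post, step_size, temp, rng):
    # One Langevin update for a chain targeting pi(theta)^(1/temp):
    # gradient drift plus Gaussian noise whose variance grows with temp.
    noise = rng.normal(scale=np.sqrt(2.0 * step_size * temp), size=theta.shape)
    return theta + step_size * grad_log_post(theta) + noise

def exchange_step(population, log_post, temps, rng):
    # Metropolis-style swap between two adjacent tempered chains. A hot
    # chain explores freely and can hand a good state to a cold chain,
    # which is one way population MCMC escapes local traps. A full-data
    # log_post is used here for simplicity; a stochastic-gradient variant
    # would replace it with a mini-batch estimate.
    i = rng.integers(len(population) - 1)
    j = i + 1
    log_alpha = (1.0 / temps[i] - 1.0 / temps[j]) * (
        log_post(population[j]) - log_post(population[i])
    )
    if np.log(rng.uniform()) < log_alpha:
        population[i], population[j] = population[j].copy(), population[i].copy()
    return population

# Illustrative driver: three tempered chains on a 1-D double-well target.
rng = np.random.default_rng(0)
log_post = lambda th: -(th[0] ** 2 - 1.0) ** 2       # double-well log-density
grad = lambda th: -4.0 * th * (th[0] ** 2 - 1.0)     # its exact gradient
temps = [1.0, 2.0, 4.0]
population = [rng.normal(size=1) for _ in temps]
for _ in range(1000):
    population = [sgld_step(th, grad, 1e-3, T, rng)
                  for th, T in zip(population, temps)]
    population = exchange_step(population, log_post, temps, rng)

Swaps are proposed only between adjacent temperatures so that acceptance rates stay reasonable; the evolutionary operators referenced in the abstract (e.g., the crossover and mutation moves of evolutionary Monte Carlo) generalize this kind of between-chain interaction.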
Keywords:
evolutionary Monte Carlo|stochastic gradient Langevin dynamic|non-convex learning|population Markov chain Monte Carlo|local trap
Sponsors:
Section on Statistical Computing
Tracks:
Monte Carlo Methods & Simulation
Can this be considered for alternate subtype?
Yes
Are you interested in volunteering to serve as a session chair?
No
I have read and understand that JSM participants must abide by the Participant Guidelines.
Yes
I understand that JSM participants must register and pay the appropriate registration fee by June 1, 2024. The registration fee is non-refundable.
I understand