Parallelly Tempered Generative Adversarial Networks

Qifan Song, Co-Author

Jinwon Sohn, First Author and Presenting Author
Purdue University
 
Tuesday, Aug 5: 11:20 AM - 11:35 AM
1248 
Contributed Papers 
Music City Center 
A generative adversarial network (GAN) has become a cornerstone of generative AI for its ability to model complex data-generating processes. However, GAN training is notoriously unstable, often suffering from mode collapse. This work analyzes training instability through the variance of gradients, linking it to multimodality in the target distribution. To address these issues, we propose a novel GAN training framework that uses tempered distributions via convex interpolation. With a new GAN objective, the generator learns all tempered distributions simultaneously, akin to parallel tempering in statistics. Simulations demonstrate the superiority of our method over existing strategies in synthesizing image and tabular data. We theoretically show that this improvement stems from reduced gradient variance using tempered distributions. Additionally, we develop a variant of our framework to generate fair synthetic data, addressing a growing concern in trustworthy AI.
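To make the tempering idea concrete, here is a minimal illustrative sketch of building a ladder of tempered distributions by convex interpolation between target samples and an easier reference distribution. The `temper` helper, the bimodal target, and the unimodal reference below are all hypothetical choices for illustration, not the paper's exact construction; the point is that intermediate values of the interpolation weight yield flatter distributions that bridge the reference and the multimodal target.

```python
import numpy as np

rng = np.random.default_rng(0)

def temper(x_target, x_ref, alpha):
    """Convex interpolation between target and reference samples.

    alpha = 1 recovers the target distribution; smaller alpha gives a
    flatter ("hotter") intermediate distribution, analogous to the
    temperature ladder in parallel tempering.
    """
    return alpha * x_target + (1.0 - alpha) * x_ref

# Hypothetical bimodal target (two well-separated modes) and an
# easy unimodal reference distribution.
x_target = np.concatenate([rng.normal(-4, 0.5, 500), rng.normal(4, 0.5, 500)])
x_ref = rng.normal(0.0, 1.0, 1000)

# A ladder of interpolation weights; in the proposed framework the
# generator would learn all of these tempered targets simultaneously.
for alpha in (0.25, 0.5, 0.75, 1.0):
    x_t = temper(x_target, rng.permutation(x_ref), alpha)
    print(f"alpha={alpha:.2f}  spread={x_t.std():.2f}")
```

As `alpha` decreases toward 0, the interpolated samples lose the sharp bimodal structure, which is the intuition behind the reduced gradient variance claimed in the abstract: the discriminator faces a smoother intermediate target at each rung of the ladder.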

Keywords

Generative Adversarial Network

Parallel Tempering

Fair Data Generation


Variance Reduction of Gradients 

Main Sponsor

Section on Statistical Learning and Data Science