38: On the Statistical Capacity of Deep Generative Models
Edric Tam
Presenting Author
Stanford University
Monday, Aug 4: 2:00 PM - 3:50 PM
1097
Contributed Posters
Music City Center
Deep generative models are routinely used to generate samples from complex, high-dimensional distributions. Despite their apparent successes, their statistical properties are not well understood. A common assumption is that, with enough training data and sufficiently large neural networks, deep generative model samples will have arbitrarily small errors in sampling from any continuous target distribution. We set up a unifying framework that debunks this belief. We demonstrate that broad classes of deep generative models, including variational autoencoders and generative adversarial networks, are not universal generators. In the predominant case of Gaussian latent variables, these models can only generate concentrated samples that exhibit light tails. Using tools from concentration of measure and convex geometry, we give analogous results for more general log-concave and strongly log-concave latent variable distributions. We extend our results to diffusion models via a reduction argument. We use the Gromov–Lévy inequality to give similar guarantees when the latent variables lie on manifolds with positive Ricci curvature.
These results shed light on the limited capacity of common deep generative models to produce heavy-tailed samples.
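As a rough illustration of the concentration-of-measure argument (a sketch with notation introduced here, not the paper's exact statement): suppose the generator network $g:\mathbb{R}^d \to \mathbb{R}^k$ is $L$-Lipschitz, as holds for standard feedforward architectures with bounded weights and 1-Lipschitz activations, and the latent variable is $Z \sim N(0, I_d)$. The Gaussian concentration inequality then gives, for every 1-Lipschitz statistic $f:\mathbb{R}^k \to \mathbb{R}$ and every $t > 0$,
\[
\Pr\big( \lvert f(g(Z)) - \mathbb{E}[f(g(Z))] \rvert \ge t \big) \;\le\; 2\exp\!\left(-\frac{t^{2}}{2L^{2}}\right),
\]
so every Lipschitz statistic of the generated sample $X = g(Z)$ has sub-Gaussian (light) tails, no matter how heavy-tailed the target distribution is.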
Keywords
Deep Generative Models
Diffusion Models
Generative Adversarial Networks
Variational Autoencoders
Concentration of Measure
Main Sponsor
Section on Statistical Learning and Data Science