Relaxed χ^2-Divergence Gradient Flow
Tuesday, Aug 5: 11:35 AM - 11:50 AM
2789
Contributed Papers
Music City Center
Transporting samples from a source to a target distribution, given only finite samples from both, is a fundamental problem in machine learning, with applications in generative modeling and variational inference. We address this problem by approximating a discretized gradient flow of the MMD-regularized $\chi^2$-divergence between the evolving source and the fixed target distribution. We provide non-asymptotic error bounds for (i) optimization error (measuring convergence to the target distribution), (ii) sampling error (from finite to infinite sample size), and (iii) approximation error (due to regularization), with particular attention to their dependence on dimensionality. Our minimization scheme admits closed-form updates and employs a data-adaptive annealed regularization strategy to maximize descent. Experiments on tabular and vision datasets demonstrate the effectiveness of our approach.
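The abstract's closed-form updates are for the MMD-regularized $\chi^2$-divergence flow, whose exact scheme is not given here. As a minimal generic illustration of the underlying idea (transporting finite source samples toward finite target samples by an explicit Euler discretization of a gradient flow), the sketch below implements a plain MMD gradient-flow step with a Gaussian kernel. All function names, the kernel choice, and the step size are illustrative assumptions, not the authors' method.

```python
import numpy as np

def gauss_kernel(X, Y, sigma=1.0):
    # Pairwise Gaussian kernel matrix: k(x, y) = exp(-||x - y||^2 / (2 sigma^2)).
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma**2))

def mmd2(X, Y, sigma=1.0):
    # Squared MMD between the empirical measures of X (source) and Y (target).
    return (gauss_kernel(X, X, sigma).mean()
            - 2.0 * gauss_kernel(X, Y, sigma).mean()
            + gauss_kernel(Y, Y, sigma).mean())

def mmd_flow_step(X, Y, step=0.1, sigma=1.0):
    # One explicit Euler step of the (unregularized) MMD gradient flow:
    # each particle x_i moves along -grad_{x_i} MMD^2(mu_N, nu_M), where
    # grad_1 k(x, y) = -(x - y) / sigma^2 * k(x, y) for the Gaussian kernel.
    N, M = X.shape[0], Y.shape[0]
    Kxx = gauss_kernel(X, X, sigma)            # (N, N)
    Kxy = gauss_kernel(X, Y, sigma)            # (N, M)
    diff_xx = X[:, None, :] - X[None, :, :]    # (N, N, d)
    diff_xy = X[:, None, :] - Y[None, :, :]    # (N, M, d)
    grad = (-2.0 / sigma**2) * (
        (Kxx[..., None] * diff_xx).sum(1) / N**2
        - (Kxy[..., None] * diff_xy).sum(1) / (N * M)
    )
    return X - step * grad
```

Iterating `mmd_flow_step` moves the source particles toward the target samples; the paper's scheme additionally regularizes the $\chi^2$-divergence with MMD and anneals the regularization strength, which this sketch omits.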
Keywords: gradient flows, convex analysis, $\chi^2$-divergence, generative modeling, Wasserstein space
Main Sponsor: Section on Statistical Learning and Data Science