SIAM/ASA Journal on Uncertainty Quantification Invited Paper Session

Chair: Bani Mallick, Texas A&M University

Organizer: Bani Mallick, Texas A&M University

Sunday, Aug 3: 4:00 PM - 5:50 PM
Session 0264
Invited Paper Session
Music City Center
Room: CC-212

Keywords

Uncertainty Quantification

Likelihood-Free Inference

Subsampling and Boosting

Bayesian Inference

Ensemble Kalman Filter

Generative Models 

Applied: Yes

Main Sponsor

Journal on Uncertainty Quantification

Co-Sponsors

Statistical and Applied Mathematical Sciences Institute
Uncertainty Quantification in Complex Systems Interest Group

Presentations

Conditional Sampling with Monotone GANs: from Generative Models to Likelihood-Free Inference

We present an optimal transport framework for conditional sampling of probability measures. Conditional sampling is a fundamental task in Bayesian inverse problems and generative modeling. Optimal transport provides a flexible methodology for sampling the target distributions arising in these problems by constructing a deterministic coupling that maps samples from a reference distribution (e.g., a standard Gaussian) to the desired target. To extend these tools to conditional sampling, we first develop the theoretical foundations of block triangular transport in a Banach space setting by drawing connections between monotone triangular maps and optimal transport. To learn these block triangular maps, we then present a computational approach called monotone generative adversarial networks (MGANs). Our algorithm uses only samples from the underlying joint probability measure and is hence likelihood-free, making it applicable to inverse problems where likelihood evaluations are inaccessible or computationally prohibitive. We demonstrate the accuracy of MGANs for sampling the posterior distribution in Bayesian inverse problems involving ordinary and partial differential equations, and for probabilistic image inpainting. 
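
As a toy illustration of the block triangular structure (not the MGAN training procedure itself), the sketch below uses a bivariate Gaussian joint, where the map T(x, z) = (x, f(x, z)) is available in closed form; MGANs instead learn f adversarially from joint samples. All names and parameter values here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy joint: (x, y) bivariate Gaussian with correlation rho. Here the
# block triangular transport T(x, z) = (x, f(x, z)) has a closed form.
rho, mu_x, mu_y, sig_x, sig_y = 0.8, 0.0, 1.0, 1.0, 2.0

def conditional_map(x, z):
    """Second block of T: pushes reference noise z ~ N(0, 1) to y | x."""
    cond_mean = mu_y + rho * (sig_y / sig_x) * (x - mu_x)
    cond_sd = sig_y * np.sqrt(1.0 - rho**2)
    return cond_mean + cond_sd * z  # monotone (increasing) in z

# Likelihood-free conditional sampling at an observed value x*:
x_star = 0.5
z = rng.standard_normal(10_000)          # samples from the reference
y_post = conditional_map(x_star, z)      # samples from y | x = x*

print(y_post.mean(), y_post.std())       # approximate E[y|x*] and sd(y|x*)
```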

Keywords

Optimal transport, conditional simulation, likelihood-free inference, generative models 

Speaker

Ricardo Baptista, California Institute of Technology

Sparse Bayesian inference with regularized Gaussian distributions

In this talk, we will present a family of sparsity-promoting Bayesian hierarchical models based on combining Gaussian distributions with the deterministic effects of sparsity-promoting regularization, such as $\ell_1$ norms, total variation, and/or constraints. Unlike Bayesian hierarchical models built from continuous conditional distributions (for example, conditional Gaussians), regularized Gaussian distributions produce exactly sparse samples without requiring large hierarchical models. We will show how to derive approximate Gibbs samplers for these hierarchical models and discuss the advantages and disadvantages of the presented method with regard to theory, modeling, and computation. 
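
A minimal sketch of one natural reading of the mechanism, assuming the simplest possible setting (identity forward operator, $\ell_1$ regularization): pushing a Gaussian draw through the soft-thresholding proximal map yields samples with exact zeros. The hierarchical models and Gibbs samplers in the talk go well beyond this toy case.

```python
import numpy as np

rng = np.random.default_rng(1)

def soft_threshold(w, tau):
    """Proximal map of tau * ||.||_1: exact zeros whenever |w| <= tau."""
    return np.sign(w) * np.maximum(np.abs(w) - tau, 0.0)

# One batch of draws from a "regularized Gaussian" in this toy setting:
# sample w ~ N(mu, sigma^2), then return the minimizer of
#   ||x - w||^2 / (2 sigma^2) + lam * ||x||_1,
# i.e., soft-thresholding with threshold lam * sigma^2.
mu, sigma, lam, n = 0.0, 1.0, 0.5, 100_000
w = mu + sigma * rng.standard_normal(n)
x = soft_threshold(w, lam * sigma**2)

print("fraction of exactly zero components:", np.mean(x == 0.0))
```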

Keywords

Bayesian inverse problems

sparsity 

Speaker

Jasper Everink, Technical University of Denmark

Subsampling of Parametric Models with Bifidelity Boosting

Least squares regression is a ubiquitous tool for building emulators of problems across science and engineering, for purposes such as design space exploration and uncertainty quantification. When the regression data are generated by an experimental design process (e.g., a quadrature grid) involving computationally expensive models, or when the data size is large, sketching techniques have shown promise in reducing the cost of constructing the regression model while ensuring accuracy comparable to that of the full data. However, random sketching strategies, such as those based on leverage scores, lead to regression errors that are random and may exhibit large variability. To mitigate this issue, we present a novel boosting approach that leverages cheaper, lower-fidelity data for the problem at hand to identify the best sketch among a set of candidate sketches. This in turn specifies the sketch of the intended high-fidelity model and the associated data. We provide theoretical analyses of this bifidelity boosting (BFB) approach and discuss the conditions the low- and high-fidelity data must satisfy for boosting to succeed. In doing so, we derive a bound on the residual norm of the BFB sketched solution, relating it to its ideal, but computationally expensive, high-fidelity boosted counterpart. Empirical results on both manufactured and PDE data corroborate the theoretical analyses and illustrate the efficacy of the BFB solution in reducing the regression error, as compared to the non-boosted solution. 
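
A minimal sketch of the boosting loop under simplifying assumptions (uniform row sampling in place of leverage scores, synthetic correlated low- and high-fidelity data): candidate sketches are ranked using only the cheap low-fidelity regression, and the winner is applied once to the high-fidelity data. Function and variable names are illustrative, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(2)

def sketched_lstsq(A, b, rows, w):
    """Solve the row-subsampled, importance-reweighted least-squares problem."""
    return np.linalg.lstsq(w[:, None] * A[rows], w * b[rows], rcond=None)[0]

# Toy setup: shared design A, correlated low- and high-fidelity responses.
m, d, k, n_cand = 2000, 10, 60, 20
A = rng.standard_normal((m, d))
x_true = rng.standard_normal(d)
b_hi = A @ x_true + 0.1 * rng.standard_normal(m)   # expensive data
b_lo = b_hi + 0.3 * rng.standard_normal(m)         # cheap surrogate

p = np.full(m, 1.0 / m)          # uniform sampling here; leverage scores in practice
best_rows, best_res = None, np.inf
for _ in range(n_cand):          # boosting: try several candidate sketches
    rows = rng.choice(m, size=k, p=p)
    w = 1.0 / np.sqrt(k * p[rows])
    x_lo = sketched_lstsq(A, b_lo, rows, w)
    res = np.linalg.norm(A @ x_lo - b_lo)          # ranked on LOW-fidelity data only
    if res < best_res:
        best_rows, best_res = rows, res

# The winning sketch is applied once to the high-fidelity data.
w = 1.0 / np.sqrt(k * p[best_rows])
x_bfb = sketched_lstsq(A, b_hi, best_rows, w)
print("BFB relative error:", np.linalg.norm(x_bfb - x_true) / np.linalg.norm(x_true))
```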

Keywords

sketching

boosting

uncertainty quantification

multifidelity

least squares 

Speaker

Yiming Xu, University of Waterloo

Theoretical Analysis of the Resampled Ensemble Kalman Filter

Filtering involves the real-time estimation of a dynamical system's state from incomplete and noisy observations. For high-dimensional systems, ensemble Kalman filters are often the preferred method. These filters use an ensemble of interacting particles to sequentially estimate the system's state as new observations arrive. While ensemble Kalman filters are widely successful in practice, their theoretical analysis is complicated by the intricate dependencies between particles. This presentation introduces ensemble Kalman filters that include an additional resampling step to break these dependencies. The resulting algorithm admits a non-asymptotic, dimension-free theoretical analysis that improves and extends existing results for filters without resampling, while maintaining comparable performance in various numerical examples. 
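
A minimal sketch of one forecast/analysis cycle with a resampling step, under our illustrative assumption that resampling redraws an i.i.d. ensemble from a Gaussian fitted to the updated particles; the precise resampling mechanism in the talk may differ.

```python
import numpy as np

rng = np.random.default_rng(3)

def enkf_step_resampled(X, y, H, R, forward):
    """One EnKF forecast/analysis cycle followed by a Gaussian redraw.

    X: (N, d) ensemble; y: observation; H: observation operator;
    R: observation noise covariance. The final redraw from N(mean, cov)
    breaks the dependence between particles."""
    N, _ = X.shape
    X = forward(X)                                   # forecast step
    C = np.cov(X, rowvar=False)                      # ensemble covariance
    K = C @ H.T @ np.linalg.inv(H @ C @ H.T + R)     # Kalman gain
    # perturbed-observation analysis update
    eps = rng.multivariate_normal(np.zeros(len(y)), R, size=N)
    X = X + (y + eps - X @ H.T) @ K.T
    # resampling: draw a fresh i.i.d. ensemble from the fitted Gaussian
    return rng.multivariate_normal(X.mean(axis=0), np.cov(X, rowvar=False), size=N)

# Toy linear dynamics: x_{t+1} = a * x_t + noise, observed directly.
d, N, a = 2, 50, 0.9
H, R = np.eye(d), 0.1 * np.eye(d)
forward = lambda X: a * X + 0.1 * rng.standard_normal(X.shape)
X = rng.standard_normal((N, d))
X = enkf_step_resampled(X, np.array([1.0, -1.0]), H, R, forward)
print(X.mean(axis=0))   # posterior mean estimate after one cycle
```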

Keywords

ensemble Kalman filter

effective dimension

non-asymptotic error bounds

data assimilation 

Speaker

Omar Al-Ghattas