Contributed Poster Presentations: Uncertainty Quantification in Complex Systems Interest Group

Chair: Ryan Peterson, University of Colorado - Anschutz Medical Campus
 
Wednesday, Aug 7: 10:30 AM - 12:20 PM
Session 6082: Contributed Posters
Oregon Convention Center, Room CC-Hall CD

Main Sponsor

Uncertainty Quantification in Complex Systems Interest Group

Presentations

60 Accounting for stochastic gating whilst estimating ion channel kinetics from whole-cell voltage-clamp data

The heartbeat is coordinated by ion channels in cell membranes that change their conformation, a process known as gating, to allow ions to pass through. Mathematical models of cardiac ion channels can be defined as biochemical reactions describing the transitions between channel configurations, and whole-cell voltage-clamp data allow us to calibrate the parameters of such models. However, standard approaches do not distinguish between stochastic gating noise and measurement error, and the resulting estimates can be biased. To overcome these limitations, we propose a state-space model comprising a set of Itô-type stochastic differential equations describing ion channel gating, coupled with an Ohmic equation linking the noisy measurements to the channel configurations. We develop a maximum-likelihood procedure to estimate the unknown parameters. Synthetic studies show that the proposed method infers the unknown parameters with low uncertainty. These results will improve models of ion channel dynamics by accounting for both stochastic gating and measurement error during fitting.
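
As a concrete illustration of this kind of generative state-space model, the following minimal Python sketch simulates a hypothetical two-state (open/closed) channel whose open fraction follows an Itô diffusion approximation of stochastic gating, observed through an Ohmic current with separate measurement noise; all rates and parameter values are invented and do not come from the poster:

import numpy as np

rng = np.random.default_rng(1)

# Hypothetical two-state (open/closed) channel; every value below is illustrative.
N = 1000                    # number of channels in the cell
alpha, beta = 50.0, 20.0    # opening/closing rates (1/s) at a fixed clamp voltage
g_max, V, E_rev = 0.1, -40.0, -80.0   # conductance (uS), clamp voltage, reversal (mV)
sigma_obs = 0.05            # measurement-noise s.d. (nA)
dt, T = 1e-4, 0.2
n_steps = int(T / dt)

# Itô diffusion approximation of stochastic gating for the open fraction o_t:
#   do = (alpha*(1-o) - beta*o) dt + sqrt((alpha*(1-o) + beta*o)/N) dW
o = np.empty(n_steps)
o[0] = alpha / (alpha + beta)
for t in range(n_steps - 1):
    drift = alpha * (1 - o[t]) - beta * o[t]
    diff = np.sqrt(max(alpha * (1 - o[t]) + beta * o[t], 0.0) / N)
    o[t + 1] = np.clip(o[t] + drift * dt + diff * np.sqrt(dt) * rng.normal(), 0.0, 1.0)

# Ohmic observation equation linking the noisy current to the latent channel state;
# measurement error is kept separate from the gating noise above.
I_obs = g_max * o * (V - E_rev) + sigma_obs * rng.normal(size=n_steps)
print(I_obs[:5])

Fitting the parameters of such a model by maximum likelihood (for instance via expectation-maximization over the latent gating path) is the inference problem the poster addresses.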

Keywords

State-space models

expectation-maximization

parameter inference

ion channels

cardiac electrophysiology

uncertainty quantification 

First Author

Luca Del Core

Presenting Author

Luca Del Core

61 Anytime-Valid Generalized Universal Inference on Risk Minimizers

A common goal in statistics and machine learning is the estimation of unknowns. Point estimates alone are of little value without an accompanying measure of uncertainty, but traditional uncertainty quantification methods, such as confidence sets and p-values, often require strong distributional or structural assumptions that may not be justified in modern problems. This paper considers a common setting in machine learning in which the quantity of interest is the minimizer of a given risk (expected loss) function. For such cases, we propose a generalized universal procedure for inference on risk minimizers that enjoys a finite-sample frequentist validity property under mild distributional assumptions. One version of the proposed procedure is anytime-valid in the sense that it maintains its validity regardless of the stopping rule used for data collection. We show how this anytime-validity property offers protection against certain factors contributing to the replication crisis in science.
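
For intuition, here is a minimal Python sketch of a split-sample generalized universal confidence set for a risk minimizer (the mean, under squared-error loss): the usual likelihood ratio is replaced by a Gibbs-style ratio of exponentiated empirical risks. The learning rate eta is set to an arbitrary value here; its calibration, on which the validity guarantee depends, is part of the work itself.

import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(1.0, 2.0, size=200)       # observed data; risk minimizer = true mean = 1.0

def loss(theta, x):
    # squared-error loss, whose risk minimizer is the mean
    return (x - theta) ** 2

d1, d0 = x[:100], x[100:]                # sample splitting, as in universal inference
theta_hat = d1.mean()                    # empirical risk minimizer on the first split
eta, alpha = 0.05, 0.05                  # eta is an arbitrary illustrative learning rate

grid = np.linspace(-1.0, 3.0, 401)
# Generalized e-value: ratio of Gibbs "likelihoods" exp(-eta * empirical risk) on D0.
log_T = np.array([eta * np.sum(loss(t, d0) - loss(theta_hat, d0)) for t in grid])
ci = grid[log_T <= np.log(1.0 / alpha)]  # keep theta whose e-value stays below 1/alpha
print(f"generalized universal 95% set: [{ci.min():.3f}, {ci.max():.3f}]")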

Keywords

e-process

e-value

empirical risk minimization

Gibbs posterior

learning rate

machine learning 

Co-Author(s)

Ryan Martin
Jonathan Williams, North Carolina State University

First Author

Neil Dey

Presenting Author

Neil Dey

62 Deep Gaussian Processes for Uncertainty Quantification in Large-Data Classification Settings

Many applications of experimental design produce categorical response data. Gaussian processes (GPs) are stochastic models that provide flexible fitting of response surfaces, but they must be modified to handle non-Gaussian likelihoods. Fully Bayesian estimation of a GP classifier requires sampling directly from a latent layer, which involves inverting covariance matrices; this can be computationally infeasible in large-data regimes. The Vecchia approximation reduces the cost of these inversions by inducing sparse Cholesky decompositions. By combining it with the elliptical slice sampling (ESS) algorithm for generating valid posterior samples from the latent layer, we obtain a tractable, fully Bayesian approach to fitting and predicting from a global GP classification model in large-data settings. We apply our methods to a binary black hole (BBH) simulator example, whose response contains both binary and real-valued components. Our combination of fully Bayesian classification and regression provides full uncertainty quantification (UQ) for BBH formation and chirp mass.
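
The following minimal Python sketch shows the elliptical slice sampling update for the latent layer of a GP binary classifier, using a dense covariance for clarity; the Vecchia approximation that makes the poster's approach scale is omitted, and the kernel and toy data are invented:

import numpy as np

rng = np.random.default_rng(0)

def rbf(X1, X2, ls=0.3):
    # squared-exponential kernel (illustrative choice)
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls**2)

def log_lik(f, y):
    # Bernoulli likelihood with a logistic link on the latent layer f
    return np.sum(y * f - np.logaddexp(0.0, f))

def ess_step(f, y, L):
    # One elliptical slice sampling update (Murray, Adams & MacKay, 2010).
    nu = L @ rng.normal(size=f.size)              # draw from the GP prior N(0, K)
    log_y = log_lik(f, y) + np.log(rng.random())  # slice threshold
    theta = rng.uniform(0.0, 2.0 * np.pi)
    lo, hi = theta - 2.0 * np.pi, theta
    while True:                                   # shrink the bracket until accepted
        f_new = f * np.cos(theta) + nu * np.sin(theta)
        if log_lik(f_new, y) > log_y:
            return f_new
        lo, hi = (theta, hi) if theta < 0.0 else (lo, theta)
        theta = rng.uniform(lo, hi)

# Toy classification data: labels driven by the first input coordinate.
n = 200
X = rng.uniform(size=(n, 2))
y = (X[:, 0] + 0.1 * rng.normal(size=n) > 0.5).astype(float)
L = np.linalg.cholesky(rbf(X, X) + 1e-6 * np.eye(n))

f = np.zeros(n)
for _ in range(500):                              # MCMC over the latent layer
    f = ess_step(f, y, L)
print("latent f range:", f.min().round(2), f.max().round(2))

In the large-data setting described above, the dense Cholesky factor would be replaced by the sparse factor induced by the Vecchia approximation.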

Keywords

Computer Experiments

Categorical Data

Vecchia Approximation

Black Hole Simulation

Elliptical Slice Sampling 

Co-Author(s)

Annie Booth, North Carolina State University
Robert Gramacy, Virginia Tech

First Author

Andrew Cooper

Presenting Author

Andrew Cooper

63 Probabilistic forecast of nonlinear dynamical systems with uncertainty quantification

Data-driven modeling is useful for reconstructing nonlinear dynamical systems when the underlying process is unknown or too expensive to compute. In this work, we first extend parallel partial Gaussian processes to predict the vector-valued transition function and quantify the uncertainty of predictions by posterior sampling. Second, we show the equivalence between dynamic mode decomposition (DMD) and the maximum likelihood estimator of the transition matrix in a linear state-space model, yielding a probabilistic generative model for DMD and enabling uncertainty quantification. For noisy systems, the absence of a noise term in DMD prevents reliable estimation of the dimensions and the transition matrix. We therefore integrate a Kalman filter into a fast expectation-maximization (EM) algorithm that reduces the computational order and requires no additional numerical optimization within each EM step. We study two examples, one from climate science and one from simulating quantum many-body systems far from equilibrium. These examples indicate that forecast uncertainty can be properly quantified, whereas model or input misspecification can degrade the accuracy of uncertainty quantification.
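
To illustrate the DMD-as-MLE connection, the minimal Python sketch below fits the transition matrix of a toy noisy linear system by least squares on snapshot pairs (the DMD estimate, which coincides with the MLE under Gaussian noise) and then propagates samples through the fitted model for a probabilistic forecast; the system and all values are invented:

import numpy as np

rng = np.random.default_rng(0)

# Toy noisy linear system x_{t+1} = A x_t + w_t (all values illustrative).
A_true = np.array([[0.9, -0.2], [0.1, 0.95]])
T = 500
x = np.zeros((2, T))
x[:, 0] = [1.0, 0.0]
for t in range(T - 1):
    x[:, t + 1] = A_true @ x[:, t] + 0.05 * rng.normal(size=2)

# DMD transition matrix = least-squares fit on snapshot pairs (X, Y),
# which is the MLE of A in the linear state-space model with Gaussian noise.
X, Y = x[:, :-1], x[:, 1:]
A_hat = Y @ np.linalg.pinv(X)
resid = Y - A_hat @ X
Sigma = resid @ resid.T / resid.shape[1]   # MLE of the noise covariance

# Probabilistic forecast: sample trajectories from the fitted generative model.
h, n_draws = 20, 1000
samples = np.tile(x[:, -1], (n_draws, 1)).T
Ls = np.linalg.cholesky(Sigma)
for _ in range(h):
    samples = A_hat @ samples + Ls @ rng.normal(size=samples.shape)
mean, sd = samples.mean(axis=1), samples.std(axis=1)
print("h-step forecast mean:", mean.round(3), "sd:", sd.round(3))

The poster's EM algorithm with an embedded Kalman filter handles the harder case in which the states themselves are observed with noise, which this sketch ignores.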

Keywords

Bayesian priors

Dynamic mode decomposition

Forecast

Gaussian processes

Noisy systems

Uncertainty quantification 

Co-Author(s)

Mengyang Gu, University of California, Santa Barbara
Victor Chang Lee, Yale University
Diana Qiu, Yale University

First Author

Yizi Lin

Presenting Author

Yizi Lin

64 Reliable emulation of complex functionals by active learning with error control

A statistical emulator can be used as a surrogate for complex physics-based calculations, drastically reducing their computational cost. Its effectiveness relies on accurately representing nonlinear response surfaces over high-dimensional input spaces. Traditional "space-filling" designs, such as random and Latin hypercube sampling, lose efficiency as input dimensionality grows, degrading emulator accuracy. To overcome this issue, we introduce Active Learning with Error Control (ALEC) for reliably predicting complex functionals. ALEC is applicable to emulating expensive computer models with infinite-dimensional inputs and ensures high-fidelity predictions with controlled errors. We derive a criterion ensuring that the fraction of samples with predictive errors larger than a threshold is small, and we develop an iterative algorithm to reduce the computational cost. We demonstrate the accuracy of ALEC by emulating classical density functional theory (cDFT) calculations, which are crucial for simulating thermodynamic properties of fluids. ALEC outperforms Gaussian process emulators with conventional designs, as well as active learning methods with other criteria, in both accuracy and computational efficiency.
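
For intuition, the following minimal Python sketch runs an active-learning loop in the spirit described above, using a basic GP emulator and its predictive standard deviation as a stand-in for the poster's error-control criterion; the target function, kernel, and thresholds are invented:

import numpy as np

rng = np.random.default_rng(0)

def f(x):
    # stand-in for an expensive computer model (e.g., a cDFT calculation)
    return np.sin(4 * x) + 0.5 * x**2

def rbf(a, b, ls=0.3):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls**2)

def gp_posterior(Xtr, ytr, Xte, jitter=1e-8):
    # standard noiseless GP regression posterior mean and s.d.
    K = rbf(Xtr, Xtr) + jitter * np.eye(len(Xtr))
    Ks, Kss = rbf(Xte, Xtr), rbf(Xte, Xte)
    mean = Ks @ np.linalg.solve(K, ytr)
    cov = Kss - Ks @ np.linalg.solve(K, Ks.T)
    return mean, np.sqrt(np.clip(np.diag(cov), 0.0, None))

cand = np.linspace(0.0, 1.0, 400)        # candidate inputs
X = list(rng.uniform(size=4))            # small initial design
eps, tol = 0.1, 0.01                     # error threshold; allowed exceedance fraction

while True:
    Xa = np.array(X)
    mean, sd = gp_posterior(Xa, f(Xa), cand)
    # Proxy criterion: stop once the fraction of candidates whose predictive
    # uncertainty exceeds the error threshold is small enough.
    if np.mean(sd > eps) <= tol:
        break
    X.append(cand[np.argmax(sd)])        # run the expensive model where most uncertain

print(f"design size: {len(X)}, max predictive sd: {sd.max():.4f}")

ALEC's actual criterion directly controls the fraction of predictions whose error exceeds a threshold, rather than the variance proxy used in this sketch.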

Keywords

Active learning

Computational model emulation

Error control

Gaussian processes

High-dimensional input 

Co-Author(s)

Mengyang Gu, University of California, Santa Barbara
Jianzhong Wu, University of California, Riverside

First Author

Xinyi Fang

Presenting Author

Xinyi Fang