Section on Statistics in Imaging Contributed Session 2

Dingke Tang, Chair

Monday, Aug 4: 8:30 AM - 10:20 AM
Session 4040: Contributed Papers
Music City Center, Room CC-102A

Main Sponsor

Section on Statistics in Imaging

Presentations

Bayesian Tensor Modeling for Dimension Reduction and Variable Selection in Neuroimaging Data

The rapid growth of high-dimensional neuroimaging data demands advanced statistical models that extract meaningful features while handling sparsity and structural dependencies. In this talk, we introduce a Bayesian tensor regression framework for matrix- and tensor-variate neuroimaging models with mixed-type responses, such as disease status and clinical measures. Our approach employs global-local shrinkage priors to enforce sparsity and low-rank structure, efficiently capturing dependencies among imaging predictors. Using a data augmentation strategy, we enable computationally efficient posterior inference via Gibbs sampling. Applied to Alzheimer's Disease MRI data, our model identifies key imaging biomarkers linked to disease progression. We establish posterior consistency in high-dimensional settings and validate robustness through simulations. Extending our framework with hierarchical priors enhances interpretability and scalability. This method offers a flexible solution for structured low-rank regression with applications in neurodegenerative disease research, functional connectivity analysis, and precision medicine. 
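
The abstract does not spell out the decomposition, so the following is only a hedged sketch (our notation) of one common way to pair a rank-R CP expansion of the coefficient tensor with global-local shrinkage on its margins:

```latex
% Hedged sketch only; the authors' exact prior, rank, and link may differ.
g\bigl(\mathrm{E}[y_i \mid \mathcal{X}_i]\bigr) \;=\; \alpha + \langle \mathcal{B}, \mathcal{X}_i \rangle,
\qquad
\mathcal{B} \;=\; \sum_{r=1}^{R} \boldsymbol{\beta}_r^{(1)} \circ \boldsymbol{\beta}_r^{(2)} \circ \boldsymbol{\beta}_r^{(3)},
\\[4pt]
\beta^{(d)}_{r,j} \mid \lambda^{(d)}_{r,j}, \tau \;\sim\; \mathrm{N}\!\bigl(0,\; \lambda^{(d)}_{r,j}\,\tau\bigr),
\qquad
\lambda^{(d)}_{r,j} \sim \pi_{\mathrm{local}}, \qquad \tau \sim \pi_{\mathrm{global}}.
```

Here g is a link chosen to accommodate mixed-type responses (for example, identity for continuous clinical measures and probit with data augmentation for disease status); the local scales adapt sparsity entry by entry while the global scale controls overall shrinkage.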

Keywords

Bayesian Tensor Regression

Neuroimaging Data Analysis

Dimension Reduction

Variable Selection

Global-Local Shrinkage Priors

Low-Rank Structure 

First Author

Hsin-Hsiung Huang, University of Central Florida

Presenting Author

Hsin-Hsiung Huang, University of Central Florida

Deep Generative Modeling with Spatial and Network Images: An Explainable AI (XAI) Approach

In medical imaging studies, understanding associations among diverse image sets is key. This work proposes a generative model to predict task-based brain activation maps (t-fMRI) using spatially-varying cortical metrics (s-MRI) and brain connectivity networks (rs-fMRI). The model incorporates spatially-varying and network-valued inputs, with deep neural networks capturing non-linear network effects and spatially-varying regression coefficients. Key advantages include accounting for spatial smoothness, subject heterogeneity, and multi-scale associations, enabling accurate predictive inference. The model estimates predictor effects, quantifies uncertainty via Monte Carlo dropout, and introduces an Explainable AI (XAI) framework for heterogeneous image data. By treating image voxels as effective samples, it addresses sample size limitations and ensures scalability without extensive pre-processing. Comparative studies demonstrate its performance against statistical and deep learning methods. 
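
As background on the uncertainty-quantification step, below is a minimal, generic sketch of Monte Carlo dropout in PyTorch. The network, layer sizes, dropout rate, and function names are illustrative placeholders; they do not reproduce the authors' spatially-varying and network-valued architecture.

```python
# Minimal, generic Monte Carlo dropout sketch (illustrative only; not the
# authors' architecture). Dropout is kept active at prediction time and
# repeated stochastic forward passes yield a predictive mean and spread.
import torch
import torch.nn as nn

class DropoutRegressor(nn.Module):
    def __init__(self, in_dim, hidden=64, p=0.2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(), nn.Dropout(p),
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Dropout(p),
            nn.Linear(hidden, 1),
        )

    def forward(self, x):
        return self.net(x)

def mc_dropout_predict(model, x, n_samples=100):
    """Average n_samples stochastic forward passes with dropout enabled."""
    model.train()  # keeps dropout layers active at test time
    with torch.no_grad():
        draws = torch.stack([model(x) for _ in range(n_samples)])
    return draws.mean(dim=0), draws.std(dim=0)  # predictive mean, uncertainty

# Usage on synthetic inputs
model = DropoutRegressor(in_dim=10)
x_new = torch.randn(5, 10)
pred_mean, pred_sd = mc_dropout_predict(model, x_new)
```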

Keywords

Deep neural network

explainable artificial intelligence

Monte Carlo (MC) dropout

multimodal neuroimaging data

variational inference 

Co-Author(s)

Rajarshi Guhaniyogi, Texas A&M University
Aaron Scheffler, University of California-San Francisco

First Author

Yeseul Jeon, Texas A&M University

Presenting Author

Yeseul Jeon, Texas A&M University

Distributionally Accurate fMRI Phase Activation

In fMRI, voxel time series are complex-valued after image reconstruction because of magnetic field inhomogeneities and a lack of k-space Hermitian symmetry. It is well known that the real and imaginary parts of k-space measurements from the analog-to-digital converters (ADCs) are normally distributed, and, because the inverse discrete Fourier transform (IDFT) is linear, the real and imaginary parts of voxel measurements are normally distributed as well. Transforming from real and imaginary Cartesian coordinates to magnitude and phase and then marginalizing out the magnitude yields a non-normal, analytically inconvenient distribution for the phase. In practice, a large-SNR assumption is generally imposed so that the phase can be treated as approximately normal. Here, the exact distribution of the phase will be used to detect task activation within phase time series and compared to the large-SNR normal approximation.
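
For reference, the exact distribution referred to is the standard marginal phase density of a complex measurement with independent N(ν cos φ, σ²) and N(ν sin φ, σ²) real and imaginary parts (notation ours). Writing κ = ν/σ for the SNR and Φ for the standard normal CDF,

```latex
% Standard phase density of a bivariate normal with isotropic variance (notation ours).
f(\theta) \;=\; \frac{1}{2\pi}\, e^{-\kappa^2/2}
\;+\; \frac{\kappa \cos(\theta-\phi)}{\sqrt{2\pi}}\,
      e^{-\kappa^2 \sin^2(\theta-\phi)/2}\,
      \Phi\!\bigl(\kappa \cos(\theta-\phi)\bigr),
\qquad \theta \in (-\pi, \pi],
```

which tends to the N(φ, 1/κ²) normal approximation as the SNR κ grows.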

Keywords

fMRI

Phase

Activation 

First Author

Dan Rowe, Marquette University

Presenting Author

Dan Rowe, Marquette University

Nested Hypothesis Tests for Discovering Separability Structures in Multivariate Functional Data

Notions of separability have frequently been utilized for tensor data, including random matrices and spatiotemporal data. Multivariate functional data in which the components share a common domain, such as regional BOLD signals in fMRI studies, constitute another important example. Separability of the covariance is a common structural assumption that leads to simplified computation and analysis. In recent years, two generalizations of separability have been proposed, namely weak and partial separability, where the latter is a further generalization of the former. This talk will outline a nonparametric nested testing procedure to aid in choosing one of these separability structures (or none at all) for a given data set. The tests for separability and weak separability are based on existing tests in the literature, while a novel test is proposed for assessing partial separability. Null distributions of the relevant test statistics are approximated via bootstrapping. Theoretical properties will be presented, along with an illustrative analysis on fMRI scans during a motor task. 
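
As a hedged reminder of the two endpoints of this nesting (notation ours; weak separability sits in between), consider multivariate functional data X(t) = (X_1(t), ..., X_p(t)) on a common domain:

```latex
% Separability (strongest assumption):
\mathrm{Cov}\bigl(X_j(s), X_k(t)\bigr) \;=\; A_{jk}\, C(s,t),
\\[4pt]
% Partial separability (weakest in the nesting), with a common basis of functions:
\mathrm{Cov}\bigl(X_j(s), X_k(t)\bigr) \;=\; \sum_{l \ge 1} (A_l)_{jk}\, \varphi_l(s)\, \varphi_l(t).
```

Separability is recovered when A_l = η_l A for a single component matrix A, with η_l the eigenvalues of C; the nested procedure tests down this hierarchy for a given data set.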

Keywords

Multivariate Functional Data

Separable Covariance

Nested Hypothesis Testing 

Co-Author(s)

Andrew Pope, Brigham Young University
Garritt Page, Brigham Young University

First Author

Alexander Petersen, Brigham Young University

Presenting Author

Alexander Petersen, Brigham Young University

Scale Mixtures of Complex Gaussian and Bayesian Shrinkage

Complex-valued distributions are widely used in fields such as signal processing and neuroimaging, where magnetic resonance imaging (MRI) and functional MRI (fMRI) data are inherently complex-valued due to phase imperfections. Leveraging the full complex-valued data improves statistical power, inference, and prediction compared to using only the magnitude or the real-valued component. This paper extends scale mixtures of Gaussians to the complex domain, deriving the most general complex-valued versions of the Student-t, Laplace, and generalized double Pareto (GDP) distributions, with their real-valued equivalents recovered as special cases. We apply these distributions as shrinkage priors in complex-valued Bayesian regression, developing novel MCMC algorithms that estimate correlations between real and imaginary components. Simulations and fMRI data demonstrate that complex-valued shrinkage priors enhance variable selection, coefficient estimation, and predictive accuracy, particularly when the real and imaginary parts are highly correlated. The R package cplxrv provides tools for simulating complex-valued random variables and implementing the proposed MCMC methods.
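
To make the scale-mixture construction concrete, here is a small, generic simulation sketch in plain NumPy (this is not the cplxrv interface, and the parameterization is ours): a latent scale multiplies a correlated bivariate normal for the real and imaginary parts, with inverse-gamma mixing giving a complex Student-t and exponential mixing giving a complex Laplace.

```python
# Generic complex scale-mixture-of-Gaussians simulation (illustrative only;
# parameterizations in the authors' cplxrv package may differ).
import numpy as np

rng = np.random.default_rng(0)

def rcomplex_scale_mixture(n, Gamma, mixing):
    """Draw z = x + iy where (x, y) | lam ~ N(0, lam * Gamma) and lam ~ mixing."""
    lam = mixing(n)                                   # latent scales
    xy = rng.multivariate_normal(np.zeros(2), Gamma, size=n)
    xy *= np.sqrt(lam)[:, None]
    return xy[:, 0] + 1j * xy[:, 1]

Gamma = np.array([[1.0, 0.7], [0.7, 1.0]])            # correlated Re/Im parts

# Complex Student-t: inverse-gamma mixing with nu degrees of freedom
nu = 4.0
z_t = rcomplex_scale_mixture(10_000, Gamma, lambda n: 1.0 / rng.gamma(nu / 2, 2.0 / nu, n))

# Complex Laplace: exponential mixing
z_lap = rcomplex_scale_mixture(10_000, Gamma, lambda n: rng.exponential(1.0, n))

print(np.corrcoef(z_t.real, z_t.imag)[0, 1])          # approximately 0.7 by construction
```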

Keywords

complex Gaussian distribution

scale mixtures of Gaussians

shrinkage

Bayesian regression

variable selection

fMRI 

Co-Author

Qishi Zhan, Marquette University

First Author

Cheng-Han Yu, Department of Mathematical and Statistical Sciences, Marquette University

Presenting Author

Cheng-Han Yu, Department of Mathematical and Statistical Sciences, Marquette University

Semiparametric Correlation Estimation in Multivariate BWAS

Multivariate brain-wide association studies (BWAS) use machine learning (ML) models to predict phenotypes from high-dimensional brain imaging. For continuous predicted features, Pearson's correlation between the predicted and observed feature is often used to quantify model accuracy in test data; however, the parameter this statistic is meant to estimate is rarely made explicit. We rigorously define multiple candidate parameters and show that the standard Pearson estimator is biased for the parameter typically of interest in multivariate BWAS. Using flexible ML models affects the rate of convergence to the true parameter, and the sample size needed for convergence often exceeds that of existing neuroimaging datasets. Additionally, the usual Fisher confidence intervals for Pearson's correlation undercover. Using semiparametric theory, we present a new estimator based on the efficient influence function of the target parameter. This estimator converges to the parameter at practical sample sizes and admits a confidence interval procedure that achieves nominal or near-nominal coverage. We show how researchers can report estimates, confidence intervals, and p-values for model accuracy without the need for permutation testing.
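
The abstract does not give the estimator's closed form; as orienting background (our notation), the nonparametric influence function of the plain Pearson correlation between two fully observed variables, the kind of object that one-step and estimating-equation constructions build on, is

```latex
% Standard influence function of the Pearson correlation functional (notation ours).
% The authors' target parameter involves the fitted prediction model, so their
% efficient influence function differs from this simple case.
\psi(x, y) \;=\; \tilde{x}\,\tilde{y} \;-\; \frac{\rho}{2}\bigl(\tilde{x}^{2} + \tilde{y}^{2}\bigr),
\qquad
\tilde{x} = \frac{x - \mu_X}{\sigma_X}, \quad \tilde{y} = \frac{y - \mu_Y}{\sigma_Y}.
```

One-step estimators of this kind add the empirical mean of an estimated influence function to a plug-in estimate and are the standard semiparametric route to faster convergence and valid confidence intervals.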

Keywords

machine learning

brain-wide association studies

correlation

semiparametric

prediction

neuroimaging 

Co-Author(s)

Ishaan Gadiyar, Vanderbilt University Medical Center
Xinyu Zhang
Kaidi Kang, Vanderbilt University
Edward Kennedy
Aaron Alexander-Bloch, Department of Psychiatry, University of Pennsylvania
Jakob Seidlitz, Department of Psychiatry, University of Pennsylvania, Philadelphia, PA, USA
Simon Vandekar, Vanderbilt University

First Author

Megan Jones

Presenting Author

Megan Jones