Deep Generative Modeling with Spatial and Network Images: An Explainable AI (XAI) Approach
Monday, Aug 4: 8:50 AM - 9:05 AM
1834
Contributed Papers
Music City Center
In medical imaging studies, understanding associations among diverse image sets is key. This work proposes a generative model to predict task-based brain activation maps (t-fMRI) using spatially-varying cortical metrics (s-MRI) and brain connectivity networks (rs-fMRI). The model incorporates spatially-varying and network-valued inputs, with deep neural networks capturing non-linear network effects and spatially-varying regression coefficients. Key advantages include accounting for spatial smoothness, subject heterogeneity, and multi-scale associations, enabling accurate predictive inference. The model estimates predictor effects, quantifies uncertainty via Monte Carlo dropout, and introduces an Explainable AI (XAI) framework for heterogeneous image data. By treating image voxels as effective samples, it addresses sample size limitations and ensures scalability without extensive pre-processing. Comparative studies demonstrate its performance against statistical and deep learning methods.
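The abstract's uncertainty quantification relies on Monte Carlo dropout: keeping dropout active at prediction time and averaging many stochastic forward passes. The following is a minimal, self-contained sketch of that idea using NumPy with a hypothetical toy network; the layer sizes, dropout rate, and number of passes are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy network: one ReLU hidden layer followed by a linear output.
W1 = rng.normal(size=(8, 16))
W2 = rng.normal(size=(16, 1))

def forward(x, p_drop=0.5):
    """One stochastic forward pass with dropout kept ON at test time."""
    h = np.maximum(x @ W1, 0.0)          # ReLU hidden layer
    mask = rng.random(h.shape) > p_drop  # Bernoulli dropout mask
    h = h * mask / (1.0 - p_drop)        # inverted-dropout scaling
    return h @ W2

x = rng.normal(size=(1, 8))

# T stochastic passes; their spread approximates predictive uncertainty.
samples = np.stack([forward(x) for _ in range(200)])
pred_mean = samples.mean(axis=0)  # predictive mean
pred_std = samples.std(axis=0)    # per-output uncertainty estimate
```

In practice the same pattern applies to any dropout-equipped network: leave the dropout layers in training mode at inference, collect the sample mean as the prediction, and use the sample standard deviation as the uncertainty map.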
Keywords: deep neural network; explainable artificial intelligence; Monte Carlo (MC) dropout; multimodal neuroimaging data; variational inference
Main Sponsor: Section on Statistics in Imaging