53: Supervised Variational Autoencoder with Mixture-of-Experts Prediction

Hongxiao Zhu (Co-Author)
Virginia Tech

Jaeyoung Lee (First Author, Presenting Author)
Virginia Tech
 
Monday, Aug 4: 2:00 PM - 3:50 PM
2102 
Contributed Posters 
Music City Center 
Large-scale datasets, such as images and texts, often exhibit complex heterogeneous structures caused by diverse data sources, intricate experimental designs, or latent subpopulations. Supervised learning from such data is challenging because it requires capturing relevant information from ultra-high-dimensional inputs while accounting for structural heterogeneity. We propose a unified framework that addresses both challenges simultaneously, facilitating effective feature extraction, structural learning, and robust prediction. The proposed framework employs a supervised variant of the variational autoencoder (VAE) for both learning and prediction. Specifically, two types of latent variables are learned through the VAE: low-dimensional latent features and a latent stick-breaking process that characterizes the heterogeneous structure of the samples. The latent features reduce the dimensionality of the input data, and the latent stick-breaking process serves as the gating function for mixture-of-experts prediction. This general framework reduces to a supervised VAE when the number of latent clusters is set to one, and to a stick-breaking VAE when both the latent features and the response variable are omitted. We demonstrate the advantages of the proposed framework by comparing it with the supervised VAE and principal component regression in two simulation studies and a real data application involving brain tumor images.
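The sketch below is a minimal illustration of the architecture described in the abstract, not the authors' implementation. It assumes PyTorch, a generic fully connected encoder and decoder, scalar-output linear experts, and default hyperparameters (z_dim, n_experts, hidden, beta, gamma) chosen only for illustration. The stick-breaking weights are computed deterministically from sigmoid outputs, and the stochastic stick-breaking prior with its KL term is omitted for brevity.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SupervisedMoEVAE(nn.Module):
    """Supervised VAE whose stick-breaking weights gate a mixture-of-experts head (illustrative sketch)."""
    def __init__(self, x_dim, z_dim=8, n_experts=5, hidden=128):
        super().__init__()
        # Encoder: shared trunk, then heads for latent features and stick-breaking fractions
        self.enc = nn.Sequential(nn.Linear(x_dim, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, z_dim)               # mean of latent features
        self.logvar = nn.Linear(hidden, z_dim)           # log-variance of latent features
        self.breaks = nn.Linear(hidden, n_experts - 1)   # stick-breaking fractions (relaxed)
        # Decoder: latent features -> reconstruction
        self.dec = nn.Sequential(nn.Linear(z_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, x_dim))
        # One scalar-output expert per latent cluster
        self.experts = nn.ModuleList([nn.Linear(z_dim, 1) for _ in range(n_experts)])

    def stick_breaking(self, v):
        # v in (0,1), shape (batch, K-1); returns mixture weights pi, shape (batch, K)
        remaining = torch.cumprod(1 - v, dim=-1)
        pi_head = v * torch.cat([torch.ones_like(v[:, :1]), remaining[:, :-1]], dim=-1)
        pi_last = remaining[:, -1:]                      # mass left for the final cluster
        return torch.cat([pi_head, pi_last], dim=-1)     # rows sum to one

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterization trick
        pi = self.stick_breaking(torch.sigmoid(self.breaks(h)))   # gating weights
        x_hat = self.dec(z)                                        # reconstruction
        expert_preds = torch.cat([e(z) for e in self.experts], dim=-1)  # (batch, K)
        y_hat = (pi * expert_preds).sum(dim=-1)                    # gated mixture-of-experts prediction
        return x_hat, y_hat, mu, logvar

def loss_fn(x, y, x_hat, y_hat, mu, logvar, beta=1.0, gamma=1.0):
    # Reconstruction + Gaussian KL on the latent features + supervised prediction loss
    recon = F.mse_loss(x_hat, x)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    pred = F.mse_loss(y_hat, y)
    return recon + beta * kl + gamma * pred

Consistent with the reduction noted in the abstract, setting n_experts to one collapses the gate to a single weight of one, recovering an ordinary supervised VAE; dropping the latent features and the response would leave only the stick-breaking component.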

Keywords

Variational Autoencoder

Data Heterogeneity

Stick-breaking Process

Supervised Machine Learning



Main Sponsor

Biometrics Section