Deeply Learned Generalized Linear Models with Missing Data

Joseph Ibrahim (Co-Author)
University of North Carolina at Chapel Hill

Naim Rashid (Speaker)
University of North Carolina at Chapel Hill
 
Tuesday, Aug 5: 2:05 PM - 2:30 PM
Invited Paper Session 
Music City Center 
Deep Learning (DL) methods have grown dramatically in popularity in recent years, with widespread application to supervised learning problems. However, the greater prevalence and complexity of missing data in modern datasets pose significant challenges for DL methods. Here, we provide a formal treatment of missing data in the context of deeply learned generalized linear models, a supervised DL architecture for regression and classification problems. We propose a new architecture, dlglm, which is among the first to flexibly account for both ignorable and non-ignorable patterns of missingness in the input features and response at training time. Through statistical simulation, we demonstrate that our method outperforms existing approaches for supervised learning tasks in the presence of missing not at random (MNAR) data. We conclude with a case study of the Bank Marketing dataset from the UCI Machine Learning Repository, in which we predict whether clients subscribed to a product based on phone survey data.
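The abstract does not include implementation details, so the sketch below is only an illustrative outline (in PyTorch) of the general idea it describes: a deeply learned GLM trained by amortized variational inference over missing inputs, with a missingness (selection) model whose likelihood depends on the imputed feature values, which is what allows non-ignorable (MNAR) mechanisms to be captured. The class name MissingDataDLGLM, the layer sizes, the Gaussian feature model, and the binary (logistic) response are assumptions for illustration and are not the authors' dlglm implementation.

```python
# Minimal illustrative sketch, NOT the authors' dlglm code.
# Assumes: features x with missing entries pre-filled with 0.0, a 0/1 mask
# (1 = observed), and a binary response y, modeled with a logistic GLM head.
import torch
import torch.nn as nn

class MissingDataDLGLM(nn.Module):
    def __init__(self, p, latent_dim=8, hidden=32):
        super().__init__()
        # Encoder q(z | x_obs, r): observed features plus mask -> latent mean/log-variance.
        self.encoder = nn.Sequential(nn.Linear(2 * p, hidden), nn.ReLU(),
                                     nn.Linear(hidden, 2 * latent_dim))
        # Decoder p(x | z): reconstructs features (the imputation model).
        self.decoder = nn.Sequential(nn.Linear(latent_dim, hidden), nn.ReLU(),
                                     nn.Linear(hidden, p))
        # GLM head p(y | x): logistic regression on the completed features.
        self.glm = nn.Linear(p, 1)
        # Missingness model p(r | x): the mask may depend on (possibly missing)
        # feature values, so non-ignorable (MNAR) mechanisms can be represented.
        self.miss = nn.Linear(p, p)

    def forward(self, x, mask, y):
        x_obs = x * mask                                        # keep observed entries only
        stats = self.encoder(torch.cat([x_obs, mask], dim=1))
        mu, logvar = stats.chunk(2, dim=1)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar) # reparameterization trick
        x_hat = self.decoder(z)
        x_full = x_obs + (1 - mask) * x_hat                     # impute missing entries
        # Negative ELBO (up to constants): reconstruction of observed features,
        # response likelihood, missingness-mask likelihood, and KL term for z.
        recon = ((x_hat - x) ** 2 * mask).sum(1).mean()
        y_nll = nn.functional.binary_cross_entropy_with_logits(
            self.glm(x_full).squeeze(1), y)
        r_nll = nn.functional.binary_cross_entropy_with_logits(
            self.miss(x_full), mask)
        kl = -0.5 * (1 + logvar - mu ** 2 - logvar.exp()).sum(1).mean()
        return recon + y_nll + r_nll + kl
```

In this sketch, a training loop would minimize the returned loss with a standard optimizer such as torch.optim.Adam; at prediction time the GLM head is applied to features completed by the decoder.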

Keywords: Deep Learning, Missing Data, Variational Inference, Supervised Learning, MNAR, Generalized Linear Models