A general framework for probabilistic model uncertainty

Chris Holmes, Co-Author
University of Oxford

Stephen Walker, Co-Author

Vik Shirvaikar, First Author and Presenting Author
University of Oxford
 
Monday, Aug 4: 10:35 AM - 10:40 AM
2252 
Contributed Speed 
Music City Center 
Existing approaches to model uncertainty typically either compare models using a quantitative model selection criterion or evaluate posterior model probabilities after specifying a prior over models. In this paper, we propose an alternative strategy that views missing observations as the source of model uncertainty: with the complete data, the true model would be identified. Quantifying model uncertainty then requires a probability distribution for the missing observations conditional on what has been observed. This distribution can be constructed sequentially using one-step-ahead predictive densities, which recursively sample from the best model according to some consistent model selection criterion. Repeated predictive sampling of the missing data, yielding a complete dataset and hence a best model each time, provides our measure of model uncertainty. This approach bypasses both subjective prior specification and integration over parameter spaces, addressing well-known issues with standard methods such as the Bayes factor. We provide illustrations from hypothesis testing, density estimation, and variable selection, demonstrating the approach on a range of standard problems.
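
To make the recursion concrete, the following is a minimal sketch of the predictive-resampling idea under illustrative assumptions not taken from the paper: two Gaussian candidate models (fixed zero mean versus free mean, unit variance), BIC as the consistent model selection criterion, and plug-in one-step-ahead predictives. All function names and settings here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def bic_select(y):
    """Return 0 for M0: N(0, 1) or 1 for M1: N(mu, 1), whichever has lower BIC."""
    n = len(y)
    ll0 = -0.5 * np.sum(y ** 2)               # log-likelihood up to shared constants
    ll1 = -0.5 * np.sum((y - y.mean()) ** 2)  # M1 plugs in the MLE of mu
    bic0 = -2 * ll0                  # zero free parameters
    bic1 = -2 * ll1 + np.log(n)      # one free parameter (mu)
    return int(bic1 < bic0)

def predictive_draw(y, model):
    """One-step-ahead draw from the selected model's plug-in predictive."""
    mu = 0.0 if model == 0 else y.mean()
    return rng.normal(mu, 1.0)

def model_uncertainty(y_obs, n_complete=500, n_rep=200):
    """Proportion of completed datasets whose best model is M1."""
    picks = []
    for _ in range(n_rep):
        y = np.array(y_obs, dtype=float)
        while len(y) < n_complete:
            m = bic_select(y)                        # best model so far
            y = np.append(y, predictive_draw(y, m))  # impute the next observation
        picks.append(bic_select(y))                  # best model for the complete data
    return np.mean(picks)

y_obs = rng.normal(0.3, 1.0, size=30)  # observed data with a modest signal
print(f"P(M1) by predictive resampling: {model_uncertainty(y_obs):.2f}")
```

Each replicate completes the observed data with draws from whichever model the criterion currently favours, so the resulting proportion plays the role a posterior model probability would in a standard Bayesian analysis, without a prior over models or integration over parameters.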

Keywords

predictive inference

model uncertainty

hypothesis testing

Main Sponsor

International Society for Bayesian Analysis (ISBA)