Bayesian Model Assessment using NIMBLE
Thursday, Aug 7: 8:35 AM - 8:55 AM
Topic-Contributed Paper Session
Music City Center
Posterior predictive p-values (ppps) have become popular tools for Bayesian model assessment because they are general-purpose and easy to use. However, they can be difficult to interpret because their distribution is not uniform under the hypothesis that the model did generate the data. Calibrated ppps (cppps) can be obtained via a bootstrap-like procedure, but they remain little used in practice due to their high computational cost. This work introduces methods that enable efficient approximation of cppps and their uncertainty for fast model assessment. We first investigate the computational tradeoff between the number of calibration replicates and the number of MCMC samples per replicate. Provided that the MCMC chain fitted to the real data has converged, running short MCMC chains for each calibration replicate can save significant computation time relative to naive implementations without a significant loss of accuracy. We propose different variance estimators for the cppp approximation, which can be used to quickly confirm the lack of evidence against model misspecification. Because variance estimation relies on the effective sample sizes of many short MCMC chains, we show that these can be approximated well from the real-data MCMC chain. The cppp procedure is implemented in NIMBLE, a flexible framework for hierarchical modeling that supports a wide range of models and discrepancy measures.
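To make the bootstrap-like calibration concrete, the following is a minimal sketch in Python, not the NIMBLE implementation. It assumes a toy conjugate normal model (data Normal(mu, 1), prior mu ~ Normal(0, 100)) so that posterior draws can be generated directly in place of MCMC; the discrepancy (sample variance) and all function names here are illustrative choices, not those of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def ppp(y, n_post=1000, rng=rng):
    """Posterior predictive p-value for a toy normal model.

    Toy model: y_i ~ Normal(mu, 1), prior mu ~ Normal(0, 10^2).
    The conjugate posterior stands in for an MCMC sample.
    """
    n = len(y)
    prec = 1 / 100 + n                 # posterior precision of mu
    post_mean = y.sum() / prec
    post_sd = np.sqrt(1 / prec)
    mus = rng.normal(post_mean, post_sd, n_post)
    # Illustrative discrepancy: sample variance (sensitive to dispersion misfit).
    d_obs = y.var(ddof=1)
    y_rep = rng.normal(mus[:, None], 1.0, (n_post, n))
    d_rep = y_rep.var(axis=1, ddof=1)
    return np.mean(d_rep >= d_obs)

def cppp(y, n_cal=200, rng=rng):
    """Calibrated ppp via a bootstrap-like procedure.

    Generate calibration datasets from the fitted model, recompute the
    ppp on each, and report where the observed ppp falls in that
    calibration distribution.
    """
    p_obs = ppp(y, rng=rng)
    n = len(y)
    prec = 1 / 100 + n
    post_mean = y.sum() / prec
    post_sd = np.sqrt(1 / prec)
    p_cal = np.empty(n_cal)
    for r in range(n_cal):
        mu = rng.normal(post_mean, post_sd)   # draw parameter from the fit
        y_sim = rng.normal(mu, 1.0, n)        # simulate a calibration dataset
        p_cal[r] = ppp(y_sim, rng=rng)        # in practice: a (short) MCMC run
    return np.mean(p_cal <= p_obs)
```

Each call to `ppp` inside the calibration loop stands in for a full MCMC fit on a simulated dataset, which is exactly why naive implementations are expensive and why shortening those per-replicate chains, as the abstract describes, matters.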