Is Bayesian Model Selection Aligned with Model Generalization?
Thursday, Aug 8: 9:05 AM - 9:20 AM
Invited Paper Session
Oregon Convention Center
How do we compare hypotheses that are entirely consistent with observations? The marginal likelihood (aka Bayesian evidence), which represents the probability of generating our observations from a prior, provides a distinctive approach to this foundational question, automatically encoding Occam's razor. Although it has been observed that the marginal likelihood can overfit and is sensitive to prior assumptions, its limitations for hyperparameter learning and discrete model comparison have not been thoroughly investigated. We first revisit the appealing properties of the marginal likelihood for learning constraints and hypothesis testing.
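As an illustration of how the marginal likelihood encodes Occam's razor, here is a minimal sketch (not from the talk; a standard textbook-style example) comparing two coin-flip models on the same fully observed sequence: a simple model that fixes the heads probability at 0.5, and a flexible model with a uniform Beta(1, 1) prior over the heads probability. Both marginal likelihoods are available in closed form, so no library beyond the standard library is assumed.

```python
from math import factorial

def evidence_fixed(n, k, p=0.5):
    # Marginal likelihood of a specific sequence with k heads in n flips
    # under a point-mass "prior" at p: no parameters are integrated out.
    return p ** k * (1 - p) ** (n - k)

def evidence_flexible(n, k):
    # Marginal likelihood under a uniform Beta(1,1) prior on the heads
    # probability: integral of p^k (1-p)^(n-k) dp = k! (n-k)! / (n+1)!.
    return factorial(k) * factorial(n - k) / factorial(n + 1)

# Balanced data (5 heads in 10): the simpler fixed-p model wins,
# because the flexible model spreads its prior mass over sequences
# the data do not support -- Occam's razor in action.
print(evidence_fixed(10, 5), evidence_flexible(10, 5))

# Skewed data (9 heads in 10): the flexible model now wins,
# since p = 0.5 explains the observations poorly.
print(evidence_fixed(10, 9), evidence_flexible(10, 9))
```

The automatic penalty on the flexible model arises purely from averaging the likelihood over the prior, with no explicit complexity term; the talk's question is when this averaging does, or does not, track generalization.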