Meta-Uncertainty in Bayesian Model Comparison
Abstract
Bayesian model comparison (BMC) offers a principled probabilistic approach to study and rank competing models. In standard BMC, we construct a discrete probability distribution over the set of possible models, conditional on the observed data of interest. These posterior model probabilities (PMPs) are measures of uncertainty, but—when derived from a finite number of observations—are also uncertain themselves. In this paper, we conceptualize distinct levels of uncertainty which arise in BMC. We explore a fully probabilistic framework for quantifying meta-uncertainty, resulting in an applied method to enhance any BMC workflow. Drawing on both Bayesian and frequentist techniques, we represent the uncertainty over the uncertain PMPs via meta-models which combine simulated and observed data into a predictive distribution for PMPs on new data. We demonstrate the utility of the proposed method in the context of conjugate Bayesian regression, likelihood-based inference with Markov chain Monte Carlo, and simulation-based inference with neural networks.
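As a brief sketch of the quantity the abstract refers to (notation ours, not taken from the paper): for candidate models $\mathcal{M}_1, \dots, \mathcal{M}_J$ with prior probabilities $p(\mathcal{M}_j)$ and marginal likelihoods $p(y \mid \mathcal{M}_j)$ of the observed data $y$, the PMP in standard BMC follows from Bayes' theorem,

$$
p(\mathcal{M}_j \mid y) \;=\; \frac{p(y \mid \mathcal{M}_j)\, p(\mathcal{M}_j)}{\sum_{k=1}^{J} p(y \mid \mathcal{M}_k)\, p(\mathcal{M}_k)},
$$

so the vector of PMPs is a single point on the probability simplex. The meta-uncertainty studied in the paper concerns how such points vary, for example across new data sets of finite size.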
Cite
Text
Schmitt et al. "Meta-Uncertainty in Bayesian Model Comparison." Artificial Intelligence and Statistics, 2023.
Markdown
[Schmitt et al. "Meta-Uncertainty in Bayesian Model Comparison." Artificial Intelligence and Statistics, 2023.](https://mlanthology.org/aistats/2023/schmitt2023aistats-metauncertainty/)
BibTeX
@inproceedings{schmitt2023aistats-metauncertainty,
title = {{Meta-Uncertainty in Bayesian Model Comparison}},
author = {Schmitt, Marvin and Radev, Stefan T. and Bürkner, Paul-Christian},
booktitle = {Artificial Intelligence and Statistics},
year = {2023},
pages = {11--29},
volume = {206},
url = {https://mlanthology.org/aistats/2023/schmitt2023aistats-metauncertainty/}
}