On Local Posterior Structure in Deep Ensembles
Abstract
Bayesian Neural Networks (BNNs) often improve model calibration and predictive uncertainty quantification compared to point estimators such as maximum-a-posteriori (MAP). Similarly, deep ensembles (DEs) are also known to improve calibration, and therefore it is natural to hypothesize that deep ensembles of BNNs (DE-BNNs) should provide even further improvements. In this work, we systematically investigate this hypothesis across a number of datasets, neural network architectures, and BNN approximation methods, and surprisingly find that when the ensembles grow large enough, DEs consistently outperform DE-BNNs on in-distribution data. To shed light on this observation, we conduct several sensitivity and ablation studies. Moreover, we show that even though DE-BNNs outperform DEs on out-of-distribution metrics, this comes at the cost of decreased in-distribution performance. As a final contribution, we open-source the large pool of trained models to facilitate further research on this topic.
Cite
Text
Jordahn et al. "On Local Posterior Structure in Deep Ensembles." Proceedings of The 28th International Conference on Artificial Intelligence and Statistics, 2025.
Markdown
[Jordahn et al. "On Local Posterior Structure in Deep Ensembles." Proceedings of The 28th International Conference on Artificial Intelligence and Statistics, 2025.](https://mlanthology.org/aistats/2025/jordahn2025aistats-local/)
BibTeX
@inproceedings{jordahn2025aistats-local,
title = {{On Local Posterior Structure in Deep Ensembles}},
author = {Jordahn, Mikkel and Jensen, Jonas Vestergaard and Schmidt, Mikkel N. and Andersen, Michael Riis},
booktitle = {Proceedings of The 28th International Conference on Artificial Intelligence and Statistics},
year = {2025},
pages = {5032--5040},
volume = {258},
url = {https://mlanthology.org/aistats/2025/jordahn2025aistats-local/}
}