Dangers of Bayesian Model Averaging Under Covariate Shift

Abstract

Approximate Bayesian inference for neural networks is considered a robust alternative to standard training, often providing good performance on out-of-distribution data. However, Bayesian neural networks (BNNs) with high-fidelity approximate inference via full-batch Hamiltonian Monte Carlo achieve poor generalization under covariate shift, even underperforming classical estimation. We explain this surprising result, showing how a Bayesian model average can in fact be problematic under covariate shift, particularly in cases where linear dependencies in the input features cause a lack of posterior contraction. We additionally show why the same issue does not affect many approximate inference procedures, or classical maximum a posteriori (MAP) training. Finally, we propose novel priors that improve the robustness of BNNs to many sources of covariate shift.
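
The mechanism described in the abstract can be illustrated on a toy model. The sketch below is not the paper's code: it uses Bayesian logistic regression with a random-walk Metropolis sampler as an illustrative stand-in for the paper's full-batch Hamiltonian Monte Carlo, and all dimensions, feature choices, and hyperparameters are assumptions made for the demo. A feature that is identically zero in training (analogous to the always-black corner pixels of MNIST) carries no likelihood information, so the posterior over its weight stays at the prior, while the prior drives the MAP weight to exactly zero.

```python
# Minimal sketch (not the paper's code): lack of posterior contraction on a
# "dead" feature, and how it corrupts the Bayesian model average under shift.
import numpy as np

rng = np.random.default_rng(0)

# Training data: feature 2 is dead (identically zero in every training input).
n, d = 500, 3
w_true = np.array([2.0, -1.0, 0.0])
X_train = rng.normal(size=(n, d))
X_train[:, 2] = 0.0                       # dead feature: uninformative in training
p = 1.0 / (1.0 + np.exp(-X_train @ w_true))
y_train = (rng.uniform(size=n) < p).astype(float)

alpha = 1.0                                # prior precision, w ~ N(0, alpha^{-1} I)

def log_posterior(w):
    logits = X_train @ w
    # Bernoulli log-likelihood plus Gaussian log-prior, up to a constant.
    ll = np.sum(y_train * logits - np.logaddexp(0.0, logits))
    return ll - 0.5 * alpha * w @ w

# MAP by gradient ascent: the prior pulls the dead weight to exactly 0.
w_map = np.zeros(d)
for _ in range(5000):
    probs = 1.0 / (1.0 + np.exp(-X_train @ w_map))
    w_map += 0.005 * (X_train.T @ (y_train - probs) - alpha * w_map)

# Posterior samples via random-walk Metropolis. Along the dead direction the
# likelihood is flat, so the posterior matches the prior: no contraction.
samples, w, lp = [], w_map.copy(), log_posterior(w_map)
for step in range(50_000):
    w_prop = w + 0.1 * rng.normal(size=d)
    lp_prop = log_posterior(w_prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        w, lp = w_prop, lp_prop
    if step % 10 == 0:
        samples.append(w)
samples = np.array(samples[1000:])         # discard burn-in
print("posterior std of dead weight:", samples[:, 2].std())  # ~1, the prior std

# Covariate shift at test time: the dead feature now carries noise, while the
# labels are unchanged (the true weight on that feature is zero).
m = 2000
X_test = rng.normal(size=(m, d))
X_test[:, 2] = rng.normal(scale=5.0, size=m)
y_test = (rng.uniform(size=m) <
          1.0 / (1.0 + np.exp(-X_test @ w_true))).astype(float)

def nll(probs, eps=1e-12):
    return -np.mean(y_test * np.log(probs + eps) +
                    (1 - y_test) * np.log(1 - probs + eps))

p_map = 1.0 / (1.0 + np.exp(-X_test @ w_map))
p_bma = np.mean(1.0 / (1.0 + np.exp(-X_test @ samples.T)), axis=1)
print("MAP test NLL:", nll(p_map))   # dead weight is 0, so the shift is ignored
print("BMA test NLL:", nll(p_bma))   # sampled dead weights corrupt the logits
```

In this toy, the printed posterior standard deviation of the dead weight stays near the prior's value of 1, and the model average's test NLL degrades toward log 2 as the sampled dead weights multiply the shifted feature, while MAP is unaffected because regularization zeroes that weight; this mirrors, under the stated assumptions, the failure mode the paper analyzes.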

Cite

Text

Izmailov et al. "Dangers of Bayesian Model Averaging Under Covariate Shift." Neural Information Processing Systems, 2021.

Markdown

[Izmailov et al. "Dangers of Bayesian Model Averaging Under Covariate Shift." Neural Information Processing Systems, 2021.](https://mlanthology.org/neurips/2021/izmailov2021neurips-dangers/)

BibTeX

@inproceedings{izmailov2021neurips-dangers,
  title     = {{Dangers of Bayesian Model Averaging Under Covariate Shift}},
  author    = {Izmailov, Pavel and Nicholson, Patrick and Lotfi, Sanae and Wilson, Andrew Gordon},
  booktitle = {Neural Information Processing Systems},
  year      = {2021},
  url       = {https://mlanthology.org/neurips/2021/izmailov2021neurips-dangers/}
}