Data Augmentation in Bayesian Neural Networks and the Cold Posterior Effect

Abstract

Bayesian neural networks that incorporate data augmentation implicitly use a “randomly perturbed log-likelihood [which] does not have a clean interpretation as a valid likelihood function” (Izmailov et al. 2021). Here, we provide several approaches to developing principled Bayesian neural networks incorporating data augmentation. We introduce a “finite orbit” setting which allows valid likelihoods to be computed exactly, and for the more usual “full orbit” setting we derive multi-sample bounds tighter than those used previously. These models cast light on the origin of the cold posterior effect. In particular, we find that the cold posterior effect persists even in these principled models incorporating data augmentation. This suggests that the cold posterior effect cannot be dismissed as an artifact of data augmentation using incorrect likelihoods.
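The "finite orbit" idea described above amounts to averaging the *likelihood* (not the log-likelihood) over a fixed, finite set of augmentations of each input, which yields a valid likelihood exactly. Below is a minimal sketch of that computation in NumPy; the function name and array layout are hypothetical illustrations, not the authors' code.

```python
import numpy as np

def finite_orbit_log_likelihood(log_probs_per_aug):
    """Valid log-likelihood under a finite augmentation orbit.

    Given log p(y | a_k(x), w) for each of K fixed augmentations a_k
    (array of shape [K, ...]), returns
        log p(y | x, w) = logsumexp_k log p(y | a_k(x), w) - log K,
    i.e. the log of the likelihood averaged over the orbit.
    """
    K = log_probs_per_aug.shape[0]
    # Numerically stable log-sum-exp over the augmentation axis.
    m = log_probs_per_aug.max(axis=0)
    lse = m + np.log(np.exp(log_probs_per_aug - m).sum(axis=0))
    return lse - np.log(K)
```

Note the averaging happens in probability space: averaging log-probabilities instead would give only a (Jensen) lower bound, which is the looser quantity the multi-sample bounds in the full-orbit setting tighten.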

Cite

Text

Nabarro et al. "Data Augmentation in Bayesian Neural Networks and the Cold Posterior Effect." Uncertainty in Artificial Intelligence, 2022.

Markdown

[Nabarro et al. "Data Augmentation in Bayesian Neural Networks and the Cold Posterior Effect." Uncertainty in Artificial Intelligence, 2022.](https://mlanthology.org/uai/2022/nabarro2022uai-data/)

BibTeX

@inproceedings{nabarro2022uai-data,
  title     = {{Data Augmentation in Bayesian Neural Networks and the Cold Posterior Effect}},
  author    = {Nabarro, Seth and Ganev, Stoil and Garriga-Alonso, Adrià and Fortuin, Vincent and van der Wilk, Mark and Aitchison, Laurence},
  booktitle = {Uncertainty in Artificial Intelligence},
  year      = {2022},
  pages     = {1434--1444},
  volume    = {180},
  url       = {https://mlanthology.org/uai/2022/nabarro2022uai-data/}
}