Bayesian Neural Network Priors Revisited

Abstract

Isotropic Gaussian priors are the de facto standard for modern Bayesian neural network inference. However, it is unclear whether these priors accurately reflect our true beliefs about the weight distributions or give optimal performance. To find better priors, we study summary statistics of neural network weights in networks trained using stochastic gradient descent (SGD). We find that convolutional neural network (CNN) and ResNet weights display strong spatial correlations, while fully connected networks (FCNNs) display heavy-tailed weight distributions. We show that building these observations into priors can lead to improved performance on a variety of image classification datasets. Surprisingly, these priors mitigate the cold posterior effect in FCNNs, but slightly increase the cold posterior effect in ResNets.
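The abstract refers to summary statistics of SGD-trained weights that motivate heavier-tailed and spatially correlated priors. The sketch below is an illustrative example of the kind of statistics involved, not the authors' code: it estimates excess kurtosis of the flattened weights (a heavy-tailedness indicator for FCNN weights) and the correlation between horizontally adjacent entries of convolutional filters (a spatial-correlation indicator for CNN weights). The function name and exact choice of statistics are assumptions for illustration.

```python
import torch
import torch.nn as nn


def weight_summary_stats(model: nn.Module) -> dict:
    """Illustrative weight summary statistics (hypothetical helper, not from the paper).

    Returns:
      excess_kurtosis: kurtosis of all flattened weights minus 3; values well
        above 0 suggest heavier-than-Gaussian tails.
      mean_adjacent_filter_corr: mean Pearson correlation between horizontally
        adjacent entries of Conv2d filters; values well above 0 suggest
        spatial correlation within filters.
    """
    stats = {}

    # Heavy-tailedness: excess kurtosis over all weights.
    flat = torch.cat([p.detach().flatten() for p in model.parameters()])
    z = (flat - flat.mean()) / flat.std()
    stats["excess_kurtosis"] = (z ** 4).mean().item() - 3.0

    # Spatial correlation: correlate each conv-filter entry with its right neighbour.
    corrs = []
    for m in model.modules():
        if isinstance(m, nn.Conv2d) and m.kernel_size[1] > 1:
            w = m.weight.detach()                # shape (out, in, kH, kW)
            left = w[..., :, :-1].flatten()
            right = w[..., :, 1:].flatten()
            corrs.append(torch.corrcoef(torch.stack([left, right]))[0, 1])
    if corrs:
        stats["mean_adjacent_filter_corr"] = torch.stack(corrs).mean().item()

    return stats
```

Under this reading, a near-zero excess kurtosis and near-zero adjacent-filter correlation would be consistent with an isotropic Gaussian prior, whereas the paper reports markedly heavy tails for FCNNs and strong spatial correlations for CNNs and ResNets.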

Cite

Text

Fortuin et al. "Bayesian Neural Network Priors Revisited." International Conference on Learning Representations, 2022.

Markdown

[Fortuin et al. "Bayesian Neural Network Priors Revisited." International Conference on Learning Representations, 2022.](https://mlanthology.org/iclr/2022/fortuin2022iclr-bayesian/)

BibTeX

@inproceedings{fortuin2022iclr-bayesian,
  title     = {{Bayesian Neural Network Priors Revisited}},
  author    = {Fortuin, Vincent and Garriga-Alonso, Adrià and Ober, Sebastian W. and Wenzel, Florian and Rätsch, Gunnar and Turner, Richard E. and van der Wilk, Mark and Aitchison, Laurence},
  booktitle = {International Conference on Learning Representations},
  year      = {2022},
  url       = {https://mlanthology.org/iclr/2022/fortuin2022iclr-bayesian/}
}