An Empirical Analysis of the Advantages of Finite V.s. Infinite Width Bayesian Neural Networks

Abstract

Comparing Bayesian neural networks (BNNs) of different widths is challenging because, as the width increases, multiple model properties change simultaneously, and inference in the finite-width case is intractable. In this work, we empirically compare finite- and infinite-width BNNs, and provide quantitative and qualitative explanations for their performance differences. We find that when the model is mis-specified, increasing the width can hurt BNN performance. In these cases, we provide evidence that finite-width BNNs generalize better, in part because of properties of their frequency spectrum that allow them to adapt under model mismatch.
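The comparison in the abstract rests on the standard fact that, as the hidden width goes to infinity, a BNN's prior over functions converges to a Gaussian process (the NNGP limit). Below is a minimal sketch of that correspondence for a one-hidden-layer ReLU network with zero-mean Gaussian priors scaled by fan-in; the helper names, prior variances, and test inputs are illustrative assumptions, not code or settings from the paper. For this one-hidden-layer case the prior covariance has a closed form (the degree-1 arc-cosine kernel), and the Monte Carlo covariance of finite-width networks matches it at every width; finite and infinite widths differ only in higher-order statistics of the prior.

```python
import numpy as np

def nngp_relu_kernel(X, sigma_w2=1.0, sigma_b2=0.0):
    """Infinite-width prior covariance of a 1-hidden-layer ReLU BNN
    (degree-1 arc-cosine kernel; illustrative helper, not the paper's code)."""
    d = X.shape[1]
    K0 = sigma_b2 + sigma_w2 * X @ X.T / d          # input-layer covariance
    diag = np.sqrt(np.diag(K0))
    cos_t = np.clip(K0 / np.outer(diag, diag), -1.0, 1.0)
    theta = np.arccos(cos_t)
    return sigma_b2 + sigma_w2 / (2 * np.pi) * np.outer(diag, diag) * (
        np.sin(theta) + (np.pi - theta) * np.cos(theta))

def finite_bnn_cov(X, width, n_samples=10000, sigma_w2=1.0, seed=0):
    """Monte Carlo prior covariance of f(x) = v^T relu(W x), with weight
    variances scaled by fan-in so the infinite-width limit is well defined."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    outs = np.empty((n_samples, n))
    for s in range(n_samples):
        W = rng.normal(0.0, np.sqrt(sigma_w2 / d), (width, d))
        v = rng.normal(0.0, np.sqrt(sigma_w2 / width), width)
        outs[s] = np.maximum(W @ X.T, 0.0).T @ v
    return outs.T @ outs / n_samples                # prior mean is zero

# Hypothetical test inputs on the unit circle in R^2.
X = np.array([[1.0, 0.0], [0.6, 0.8], [0.0, 1.0]])
K_inf = nngp_relu_kernel(X)
for width in (10, 100, 1000):
    # With one hidden layer the second moment is width-independent, so the
    # gap to K_inf is pure Monte Carlo noise at every width.
    K_fin = finite_bnn_cov(X, width)
    print(width, np.max(np.abs(K_fin - K_inf)))
```

Deeper networks, and the posterior rather than the prior, are where the finite/infinite distinction the paper studies actually bites; this sketch only illustrates the infinite-width object being compared against.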

Cite

Text

Yao et al. "An Empirical Analysis of the Advantages of Finite V.s. Infinite Width Bayesian Neural Networks." NeurIPS 2022 Workshops: ICBINB, 2022.

Markdown

[Yao et al. "An Empirical Analysis of the Advantages of Finite V.s. Infinite Width Bayesian Neural Networks." NeurIPS 2022 Workshops: ICBINB, 2022.](https://mlanthology.org/neuripsw/2022/yao2022neuripsw-empirical/)

BibTeX

@inproceedings{yao2022neuripsw-empirical,
  title     = {{An Empirical Analysis of the Advantages of Finite V.s. Infinite Width Bayesian Neural Networks}},
  author    = {Yao, Jiayu and Yacoby, Yaniv and Coker, Beau and Pan, Weiwei and Doshi-Velez, Finale},
  booktitle = {NeurIPS 2022 Workshops: ICBINB},
  year      = {2022},
  url       = {https://mlanthology.org/neuripsw/2022/yao2022neuripsw-empirical/}
}