Adapting the Linearised Laplace Model Evidence for Modern Deep Learning

Abstract

The linearised Laplace method for estimating model uncertainty has received renewed attention in the Bayesian deep learning community. The method provides reliable error bars and admits a closed-form expression for the model evidence, allowing for scalable selection of model hyperparameters. In this work, we examine the assumptions behind this method, particularly in conjunction with model selection. We show that these assumptions interact poorly with some now-standard tools of deep learning, namely stochastic approximation methods and normalisation layers, and we make recommendations for how to better adapt this classic method to the modern setting. We provide theoretical support for our recommendations and validate them empirically on MLPs, classic CNNs, residual networks with and without normalisation layers, generative autoencoders and transformers.

Cite

Text

Antoran et al. "Adapting the Linearised Laplace Model Evidence for Modern Deep Learning." International Conference on Machine Learning, 2022.

Markdown

[Antoran et al. "Adapting the Linearised Laplace Model Evidence for Modern Deep Learning." International Conference on Machine Learning, 2022.](https://mlanthology.org/icml/2022/antoran2022icml-adapting/)

BibTeX

@inproceedings{antoran2022icml-adapting,
  title     = {{Adapting the Linearised Laplace Model Evidence for Modern Deep Learning}},
  author    = {Antor{\'a}n, Javier and Janz, David and Allingham, James U and Daxberger, Erik and Barbano, Riccardo Rb and Nalisnick, Eric and Hern{\'a}ndez-Lobato, Jos{\'e} Miguel},
  booktitle = {International Conference on Machine Learning},
  year      = {2022},
  pages     = {796--821},
  volume    = {162},
  url       = {https://mlanthology.org/icml/2022/antoran2022icml-adapting/}
}