Variational Bayesian Last Layers
Abstract
We introduce a deterministic variational formulation for training Bayesian last layer neural networks. This yields a sampling-free, single-pass model and loss that effectively improves uncertainty estimation. Our variational Bayesian last layer (VBLL) can be trained and evaluated with only quadratic complexity in last layer width, and is thus (nearly) computationally free to add to standard architectures. We experimentally investigate VBLLs, and show that they improve predictive accuracy, calibration, and out-of-distribution detection over baselines across both regression and classification. Finally, we investigate combining VBLL layers with variational Bayesian feature learning, yielding a lower variance collapsed variational inference method for Bayesian neural networks.
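To make the core idea concrete, the following is a minimal NumPy sketch of a Bayesian last layer for scalar regression: a Gaussian variational posterior over the last-layer weights gives a closed-form (sampling-free, single-pass) predictive distribution, and the training loss combines the predictive log-likelihood with a KL term to a standard-normal prior. This is an illustrative sketch of the general construction, not the paper's exact parameterization or loss; the function names and the unit-Gaussian prior are assumptions for illustration.

```python
import numpy as np

def vbll_regression_head(phi, m, S, sigma2):
    """Closed-form predictive for a Bayesian last layer (illustrative sketch).

    phi:    feature vector from the backbone network, shape (D,)
    m:      variational posterior mean over last-layer weights, shape (D,)
    S:      variational posterior covariance, shape (D, D)
    sigma2: observation noise variance
    """
    pred_mean = phi @ m
    # phi @ S @ phi is quadratic in the last-layer width D -- the source of
    # the "quadratic complexity in last layer width" noted in the abstract.
    pred_var = phi @ S @ phi + sigma2
    return pred_mean, pred_var

def vbll_loss(phi, y, m, S, sigma2, kl_weight=1.0):
    """Sampling-free variational loss (sketch): Gaussian negative
    log-likelihood under the predictive, plus KL(q(w) || N(0, I))."""
    mu, var = vbll_regression_head(phi, m, S, sigma2)
    nll = 0.5 * (np.log(2.0 * np.pi * var) + (y - mu) ** 2 / var)
    D = m.shape[0]
    # Closed-form KL between two Gaussians; assumes a standard-normal prior.
    kl = 0.5 * (np.trace(S) + m @ m - D - np.log(np.linalg.det(S)))
    return nll + kl_weight * kl
```

Because both the predictive distribution and the KL term are available in closed form, no Monte Carlo samples of the weights are needed, and the loss can be minimized with ordinary gradient descent alongside the backbone's parameters.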
Cite
Text
Harrison et al. "Variational Bayesian Last Layers." International Conference on Learning Representations, 2024.
Markdown
[Harrison et al. "Variational Bayesian Last Layers." International Conference on Learning Representations, 2024.](https://mlanthology.org/iclr/2024/harrison2024iclr-variational/)
BibTeX
@inproceedings{harrison2024iclr-variational,
  title = {{Variational Bayesian Last Layers}},
  author = {Harrison, James and Willes, John and Snoek, Jasper},
  booktitle = {International Conference on Learning Representations},
  year = {2024},
  url = {https://mlanthology.org/iclr/2024/harrison2024iclr-variational/}
}