Uncertainty Estimation Using a Single Deep Deterministic Neural Network

Abstract

We propose a method for training a deterministic deep model that can find and reject out-of-distribution data points at test time with a single forward pass. Our approach, deterministic uncertainty quantification (DUQ), builds on ideas from RBF networks. We scale training in these networks with a novel loss function and centroid updating scheme, and match the accuracy of softmax models. By enforcing detectability of changes in the input using a gradient penalty, we are able to reliably detect out-of-distribution data. Our uncertainty quantification scales well to large datasets, and using a single model, we improve upon or match Deep Ensembles in out-of-distribution detection on notably difficult dataset pairs such as FashionMNIST vs. MNIST, and CIFAR-10 vs. SVHN.
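The two ingredients the abstract names, RBF-style class kernels and a centroid updating scheme, can be illustrated with a minimal NumPy sketch. Everything here is an assumption for illustration: the function names `rbf_scores` and `update_centroids`, and the values `sigma=0.5` and `gamma=0.999`, are hypothetical, and the paper's per-class weight matrices and two-sided gradient penalty are omitted.

```python
import numpy as np

def rbf_scores(z, centroids, sigma=0.5):
    # RBF kernel value per class: K_c(x) = exp(-||z - e_c||^2 / (2 * sigma^2)).
    # z: (d,) feature vector for one input; centroids: (C, d), one e_c per class.
    d2 = np.sum((centroids - z) ** 2, axis=1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def update_centroids(centroids, z_batch, y_batch, gamma=0.999):
    # Exponential-moving-average update: nudge each class centroid toward the
    # mean feature of that class in the current minibatch (illustrative gamma).
    new = centroids.copy()
    for c in range(centroids.shape[0]):
        mask = (y_batch == c)
        if mask.any():
            new[c] = gamma * centroids[c] + (1.0 - gamma) * z_batch[mask].mean(axis=0)
    return new

# Toy demo: 3 classes in a 2-D feature space.
centroids = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
scores = rbf_scores(np.array([0.05, 0.02]), centroids)
pred = int(np.argmax(scores))      # closest centroid -> predicted class
uncertainty = 1.0 - scores.max()   # far from every centroid -> high uncertainty
```

At test time, an input whose features are far from all centroids gets uniformly small kernel values, which is the single-forward-pass signal DUQ uses to flag out-of-distribution points.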

Cite

Text

Van Amersfoort et al. "Uncertainty Estimation Using a Single Deep Deterministic Neural Network." International Conference on Machine Learning, 2020.

Markdown

[Van Amersfoort et al. "Uncertainty Estimation Using a Single Deep Deterministic Neural Network." International Conference on Machine Learning, 2020.](https://mlanthology.org/icml/2020/vanamersfoort2020icml-uncertainty/)

BibTeX

@inproceedings{vanamersfoort2020icml-uncertainty,
  title     = {{Uncertainty Estimation Using a Single Deep Deterministic Neural Network}},
  author    = {Van Amersfoort, Joost and Smith, Lewis and Teh, Yee Whye and Gal, Yarin},
  booktitle = {International Conference on Machine Learning},
  year      = {2020},
  pages     = {9690--9700},
  volume    = {119},
  url       = {https://mlanthology.org/icml/2020/vanamersfoort2020icml-uncertainty/}
}