Density Uncertainty Layers for Reliable Uncertainty Estimation

Abstract

Assessing the predictive uncertainty of deep neural networks is crucial for safety-related applications of deep learning. Although Bayesian deep learning offers a principled framework for estimating model uncertainty, the common approaches that approximate the parameter posterior often fail to deliver reliable estimates of predictive uncertainty. In this paper, we propose a novel criterion for reliable predictive uncertainty: a model's predictive variance should be grounded in the empirical density of the input. That is, the model should produce higher uncertainty for inputs that are improbable in the training data and lower uncertainty for inputs that are more probable. To operationalize this criterion, we develop the density uncertainty layer, a stochastic neural network architecture that satisfies the density uncertainty criterion by design. We study density uncertainty layers on the UCI and CIFAR-10/100 uncertainty benchmarks. Compared to existing approaches, density uncertainty layers provide more reliable uncertainty estimates and robust out-of-distribution detection performance.
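The following toy sketch illustrates the density uncertainty criterion stated in the abstract, not the paper's actual layer architecture: it fits a simple Gaussian density estimate to training inputs (a stand-in for whatever density model one might use) and makes a hypothetical predictive variance grow with the input's negative log-density, so improbable inputs receive higher uncertainty. All names and the specific scaling are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training data: 1-D inputs drawn from a standard normal.
x_train = rng.normal(loc=0.0, scale=1.0, size=1000)

# Empirical density estimate of the inputs (a single Gaussian here,
# chosen only for illustration; it is not the paper's density model).
mu, sigma = x_train.mean(), x_train.std()

def energy(x):
    """Negative log-density of x under the fitted Gaussian, up to a constant."""
    return 0.5 * ((x - mu) / sigma) ** 2

def predictive_variance(x, base_var=0.1):
    """Illustrative uncertainty that increases with the energy of the input,
    so inputs that are improbable under the training data get higher variance."""
    return base_var * (1.0 + energy(x))

# An in-distribution input vs. an out-of-distribution input:
print(predictive_variance(0.0))  # low energy  -> low predictive variance
print(predictive_variance(5.0))  # high energy -> high predictive variance
```

Under this sketch, the criterion reduces to requiring that predictive variance be a monotone function of the input's energy under a density model fit to the training data.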

Cite

Text

Park and Blei. "Density Uncertainty Layers for Reliable Uncertainty Estimation." Artificial Intelligence and Statistics, 2024.

Markdown

[Park and Blei. "Density Uncertainty Layers for Reliable Uncertainty Estimation." Artificial Intelligence and Statistics, 2024.](https://mlanthology.org/aistats/2024/park2024aistats-density/)

BibTeX

@inproceedings{park2024aistats-density,
  title     = {{Density Uncertainty Layers for Reliable Uncertainty Estimation}},
  author    = {Park, Yookoon and Blei, David},
  booktitle = {Artificial Intelligence and Statistics},
  year      = {2024},
  pages     = {163--171},
  volume    = {238},
  url       = {https://mlanthology.org/aistats/2024/park2024aistats-density/}
}