Single-Model Uncertainties for Deep Learning
Abstract
We provide single-model estimates of aleatoric and epistemic uncertainty for deep neural networks. To estimate aleatoric uncertainty, we propose Simultaneous Quantile Regression (SQR), a loss function to learn all the conditional quantiles of a given target variable. These quantiles can be used to compute well-calibrated prediction intervals. To estimate epistemic uncertainty, we propose Orthonormal Certificates (OCs), a collection of diverse non-constant functions that map all training samples to zero. These certificates map out-of-distribution examples to non-zero values, signaling epistemic uncertainty. Our uncertainty estimators are computationally attractive, as they do not require ensembling or retraining deep models, and achieve state-of-the-art performance.
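The abstract describes two concrete mechanisms: a pinball (quantile) loss conditioned on a randomly sampled quantile level for SQR, and a set of linear certificates trained to output zero on in-distribution features for OCs. The following is a minimal sketch of both ideas, not the authors' code, assuming a PyTorch model; names such as `Certificates`, `feature_dim`, and the default number of certificates are illustrative assumptions.

```python
import torch
import torch.nn as nn

def pinball_loss(y, y_hat, tau):
    """Pinball (quantile) loss used by SQR: tau * e if e >= 0, else (tau - 1) * e."""
    e = y - y_hat
    return torch.mean(torch.maximum(tau * e, (tau - 1) * e))

# SQR training step (sketch): sample a quantile level per example and condition
# the network on it, so a single model learns all conditional quantiles at once.
# tau = torch.rand(x.shape[0], 1)
# loss = pinball_loss(y, model(x, tau), tau)

class Certificates(nn.Module):
    """Orthonormal Certificates (sketch): k linear maps trained to output zero
    on in-distribution features, with an orthonormality penalty for diversity."""
    def __init__(self, feature_dim, k=100):
        super().__init__()
        self.C = nn.Linear(feature_dim, k, bias=False)

    def forward(self, phi):
        # Certificate outputs; close to zero for in-distribution features.
        return self.C(phi)

    def loss(self, phi, lam=1.0):
        outputs = self.forward(phi)
        W = self.C.weight                                   # (k, feature_dim)
        ortho = ((W @ W.t()) - torch.eye(W.shape[0])).pow(2).mean()
        return outputs.pow(2).mean() + lam * ortho

# Epistemic score at test time (sketch): larger certificate outputs signal
# out-of-distribution inputs.
# score = certificates(feature_extractor(x)).pow(2).mean(dim=1)
```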
Cite
Text
Tagasovska and Lopez-Paz. "Single-Model Uncertainties for Deep Learning." Neural Information Processing Systems, 2019.

Markdown

[Tagasovska and Lopez-Paz. "Single-Model Uncertainties for Deep Learning." Neural Information Processing Systems, 2019.](https://mlanthology.org/neurips/2019/tagasovska2019neurips-singlemodel/)

BibTeX
@inproceedings{tagasovska2019neurips-singlemodel,
  title = {{Single-Model Uncertainties for Deep Learning}},
  author = {Tagasovska, Natasa and Lopez-Paz, David},
  booktitle = {Neural Information Processing Systems},
  year = {2019},
  pages = {6417--6428},
  url = {https://mlanthology.org/neurips/2019/tagasovska2019neurips-singlemodel/}
}