A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks
Abstract
We consider the two related problems of detecting if an example is misclassified or out-of-distribution. We present a simple baseline that utilizes probabilities from softmax distributions. Correctly classified examples tend to have greater maximum softmax probabilities than erroneously classified and out-of-distribution examples, allowing for their detection. We assess performance by defining several tasks in computer vision, natural language processing, and automatic speech recognition, showing the effectiveness of this baseline across all. We then show the baseline can sometimes be surpassed, demonstrating the room for future research on these underexplored detection tasks.
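The baseline described in the abstract reduces to a single score: the maximum of the softmax probabilities, with lower values flagging likely misclassified or out-of-distribution inputs. A minimal sketch of that score (function names are illustrative, not from the paper's code):

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def msp_score(logits):
    # Maximum softmax probability (MSP): per the baseline, higher scores
    # tend to indicate correctly classified, in-distribution examples.
    return softmax(logits).max(axis=-1)

# A confident prediction yields a higher MSP than a near-uniform one,
# so thresholding the score separates the two groups.
confident = np.array([5.0, 0.1, 0.2])
uncertain = np.array([1.0, 1.1, 0.9])
assert msp_score(confident) > msp_score(uncertain)
```

In practice one would sweep a threshold on this score and report threshold-free metrics such as AUROC, as the paper does for its detection tasks.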
Cite
Text
Hendrycks and Gimpel. "A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks." International Conference on Learning Representations, 2017.
Markdown
[Hendrycks and Gimpel. "A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks." International Conference on Learning Representations, 2017.](https://mlanthology.org/iclr/2017/hendrycks2017iclr-baseline/)
BibTeX
@inproceedings{hendrycks2017iclr-baseline,
title = {{A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks}},
author = {Hendrycks, Dan and Gimpel, Kevin},
booktitle = {International Conference on Learning Representations},
year = {2017},
url = {https://mlanthology.org/iclr/2017/hendrycks2017iclr-baseline/}
}