Why ReLU Networks Yield High-Confidence Predictions Far Away from the Training Data and How to Mitigate the Problem

Abstract

Classifiers used in the wild, in particular in safety-critical systems, should know when they don't know; that is, they should make low-confidence predictions far away from the training data. We show that ReLU-type neural networks fail in this regard: they almost always produce high-confidence predictions far away from the training data. For bounded domains we propose a new robust optimization technique, similar to adversarial training, which enforces low-confidence predictions far away from the training data. We show that this technique is surprisingly effective at reducing the confidence of predictions far away from the training data while, compared to standard training, maintaining high-confidence predictions and test error on the original classification task. This is a short version of the corresponding CVPR paper.
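The underlying reason for the failure is that ReLU networks compute piecewise-affine functions: along almost any ray leaving the data, the logits eventually grow linearly, so the softmax probability of the dominant class tends to one as the input is scaled further away.

The proposed mitigation can be pictured as adversarial training on noise: sample points far from the training data, adversarially sharpen the model's confidence on them, and penalize that confidence alongside the usual classification loss. Below is a minimal sketch of such a scheme, assuming PyTorch; the function names, the uniform-noise proxy for "far away" points, and the hyperparameters (steps, step_size, lam) are illustrative choices rather than the paper's exact procedure, which the full version develops as adversarial confidence enhancing training (ACET).

import torch
import torch.nn.functional as F

def max_confidence(logits):
    # Maximal softmax probability -- the quantity to be kept low on noise.
    return F.softmax(logits, dim=1).max(dim=1).values

def sharpen_confidence(model, x, steps=10, step_size=0.01):
    # Inner maximization: perturb noise points within the bounded domain
    # [0, 1]^d so the network becomes *more* confident on them, analogous
    # to adversarial training but targeting confidence, not the label.
    x_adv = x.clone().detach().requires_grad_(True)
    for _ in range(steps):
        conf = max_confidence(model(x_adv)).sum()
        grad, = torch.autograd.grad(conf, x_adv)
        with torch.no_grad():
            x_adv += step_size * grad.sign()
            x_adv.clamp_(0.0, 1.0)  # stay inside the bounded input domain
    return x_adv.detach()

def training_step(model, optimizer, x, y, lam=1.0):
    # Cross-entropy on real data plus a term that drives the (adversarially
    # sharpened) confidence on noise points towards the uniform distribution.
    noise = torch.rand_like(x)  # hypothetical proxy for far-away inputs
    noise_adv = sharpen_confidence(model, noise)

    optimizer.zero_grad()
    loss_clean = F.cross_entropy(model(x), y)
    loss_noise = torch.log(max_confidence(model(noise_adv))).mean()
    loss = loss_clean + lam * loss_noise
    loss.backward()
    optimizer.step()
    return loss.item()

Minimizing the log of the maximal softmax probability on the sharpened noise pushes the predictive distribution on such points towards uniform, which is exactly the low-confidence behavior the abstract asks for far away from the training data.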

Cite

Text

Hein et al. "Why ReLU Networks Yield High-Confidence Predictions Far Away from the Training Data and How to Mitigate the Problem." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2019.

Markdown

[Hein et al. "Why ReLU Networks Yield High-Confidence Predictions Far Away from the Training Data and How to Mitigate the Problem." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2019.](https://mlanthology.org/cvprw/2019/hein2019cvprw-relu/)

BibTeX

@inproceedings{hein2019cvprw-relu,
  title     = {{Why ReLU Networks Yield High-Confidence Predictions Far Away from the Training Data and How to Mitigate the Problem}},
  author    = {Hein, Matthias and Andriushchenko, Maksym and Bitterwolf, Julian},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
  year      = {2019},
  pages     = {58--74},
  url       = {https://mlanthology.org/cvprw/2019/hein2019cvprw-relu/}
}