OpenMix: Exploring Outlier Samples for Misclassification Detection

Abstract

Reliable confidence estimation for deep neural classifiers is a challenging yet fundamental requirement in high-stakes applications. Unfortunately, modern deep neural networks are often overconfident in their erroneous predictions. In this work, we exploit easily available outlier samples, i.e., unlabeled samples from non-target classes, to help detect misclassification errors. In particular, we find that the well-known Outlier Exposure, which is powerful for detecting out-of-distribution (OOD) samples from unknown classes, provides no gain in identifying misclassification errors. Based on this observation, we propose a novel method called OpenMix, which incorporates open-world knowledge by learning to reject uncertain pseudo-samples generated via outlier transformation. OpenMix significantly improves confidence reliability under various scenarios, establishing a strong and unified framework for detecting both misclassified samples from known classes and OOD samples from unknown classes.
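The "outlier transformation" and "learning to reject" in the abstract can be made concrete with a short sketch. The snippet below is a minimal PyTorch illustration, not the authors' released code: it mixes labeled in-distribution samples with unlabeled outliers via Mixup and interpolates the targets toward an extra (K+1)-th reject class. The function names (`openmix_batch`, `soft_cross_entropy`), the Beta(α, α) mixing coefficient, and the clamping of λ toward the in-distribution sample are illustrative assumptions.

```python
import torch
import torch.nn.functional as F


def openmix_batch(x_id, y_id, x_out, num_classes, alpha=1.0):
    """Mix in-distribution (ID) samples with outliers and build soft targets
    over K+1 classes, where index `num_classes` is the extra reject class.

    x_id:  ID inputs, shape (B, ...)
    y_id:  integer ID labels in [0, num_classes), shape (B,)
    x_out: unlabeled outlier inputs, same shape as x_id
    """
    # Mixup coefficient; keeping the ID sample dominant is an assumption here.
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    lam = max(lam, 1.0 - lam)

    x_mix = lam * x_id + (1.0 - lam) * x_out

    # Soft target: lam * one-hot(y) + (1 - lam) * one-hot(reject class).
    y_onehot = F.one_hot(y_id, num_classes + 1).float()
    reject = torch.full_like(y_id, num_classes)
    y_reject = F.one_hot(reject, num_classes + 1).float()
    y_mix = lam * y_onehot + (1.0 - lam) * y_reject
    return x_mix, y_mix


def soft_cross_entropy(logits, soft_targets):
    """Cross-entropy against soft (non-one-hot) targets."""
    return -(soft_targets * F.log_softmax(logits, dim=-1)).sum(dim=-1).mean()
```

In a training loop, the mixed batch would plausibly be trained alongside the standard cross-entropy loss on the original K classes, with the classifier's output head widened to K+1 logits; at test time, the reject-class probability can then inform the confidence score used to flag likely misclassifications.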

Cite

Text

Zhu et al. "OpenMix: Exploring Outlier Samples for Misclassification Detection." Conference on Computer Vision and Pattern Recognition, 2023. doi:10.1109/CVPR52729.2023.01162

Markdown

[Zhu et al. "OpenMix: Exploring Outlier Samples for Misclassification Detection." Conference on Computer Vision and Pattern Recognition, 2023.](https://mlanthology.org/cvpr/2023/zhu2023cvpr-openmix/) doi:10.1109/CVPR52729.2023.01162

BibTeX

@inproceedings{zhu2023cvpr-openmix,
  title     = {{OpenMix: Exploring Outlier Samples for Misclassification Detection}},
  author    = {Zhu, Fei and Cheng, Zhen and Zhang, Xu-Yao and Liu, Cheng-Lin},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2023},
  pages     = {12074--12083},
  doi       = {10.1109/CVPR52729.2023.01162},
  url       = {https://mlanthology.org/cvpr/2023/zhu2023cvpr-openmix/}
}