DOCTOR: A Simple Method for Detecting Misclassification Errors

Abstract

Deep neural networks (DNNs) have been shown to perform very well on large-scale object recognition problems, leading to widespread use in real-world applications, including situations where DNNs are deployed as “black boxes”. A promising approach to securing their use is to accept decisions that are likely to be correct while discarding the others. In this work, we propose DOCTOR, a simple method that aims to identify whether the prediction of a DNN classifier should (or should not) be trusted so that, consequently, it can be accepted or rejected. Two scenarios are investigated: Totally Black Box (TBB), where only the soft-predictions are available, and Partially Black Box (PBB), where gradient propagation for input pre-processing is allowed. Empirically, we show that DOCTOR outperforms all state-of-the-art methods on various well-known image and sentiment analysis datasets. In particular, we observe a reduction of up to 4% in the false rejection rate (FRR) in the PBB scenario. DOCTOR can be applied to any pre-trained model, requires no prior information about the underlying dataset, and is as simple as the simplest methods available in the literature.
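To make the TBB scenario concrete, the short Python sketch below illustrates a soft-prediction-based accept/reject rule of the kind the abstract describes. The specific statistic (a Gini-impurity-style score built from the sum of squared softmax probabilities, which we understand to correspond to the paper's D_alpha discriminator) and the threshold name gamma are assumptions drawn from the full paper rather than from this abstract; treat it as an illustrative sketch, not the reference implementation.

import numpy as np

def doctor_alpha_score(softmax_probs: np.ndarray) -> np.ndarray:
    """Soft-prediction uncertainty score in the spirit of DOCTOR's D_alpha:
    (1 - g(x)) / g(x), where g(x) is the sum of squared softmax probabilities.
    Larger values indicate predictions that are more likely to be wrong."""
    g = np.sum(softmax_probs ** 2, axis=-1)
    return (1.0 - g) / g

def accept_prediction(softmax_probs: np.ndarray, gamma: float) -> np.ndarray:
    """Accept (trust) a prediction when its score falls below the threshold
    gamma; otherwise reject it as a likely misclassification. gamma would be
    tuned on held-out data to trade false rejections against false acceptances."""
    return doctor_alpha_score(softmax_probs) <= gamma

# Example: three soft-prediction vectors over four classes.
probs = np.array([
    [0.97, 0.01, 0.01, 0.01],   # confident -> accepted
    [0.40, 0.30, 0.20, 0.10],   # ambiguous -> rejected
    [0.85, 0.05, 0.05, 0.05],   # fairly confident -> depends on gamma
])
print(accept_prediction(probs, gamma=1.0))

In the PBB scenario, the same rule would be applied after a gradient-based pre-processing step on the input, which the abstract mentions but whose details are left to the full paper.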

Cite

Text

Granese et al. "DOCTOR: A Simple Method for Detecting Misclassification Errors." Neural Information Processing Systems, 2021.

Markdown

[Granese et al. "DOCTOR: A Simple Method for Detecting Misclassification Errors." Neural Information Processing Systems, 2021.](https://mlanthology.org/neurips/2021/granese2021neurips-doctor/)

BibTeX

@inproceedings{granese2021neurips-doctor,
  title     = {{DOCTOR: A Simple Method for Detecting Misclassification Errors}},
  author    = {Granese, Federica and Romanelli, Marco and Gorla, Daniele and Palamidessi, Catuscia and Piantanida, Pablo},
  booktitle = {Neural Information Processing Systems},
  year      = {2021},
  url       = {https://mlanthology.org/neurips/2021/granese2021neurips-doctor/}
}