Confidence Scoring Using Whitebox Meta-Models with Linear Classifier Probes
Abstract
We propose a novel confidence scoring mechanism for deep neural networks based on a two-model paradigm involving a base model and a meta-model. The confidence score is learned by the meta-model, which observes the base model succeeding or failing at its task. As features for the meta-model, we investigate linear classifier probes inserted between the various layers of the base model. Our experiments demonstrate that this approach outperforms multiple baselines on a filtering task, i.e., the task of rejecting samples with low confidence. Experimental results are presented on the CIFAR-10 and CIFAR-100 datasets, with and without added noise. We discuss the importance of confidence scoring for bridging the gap between experimental and real-world applications.
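To make the two-model paradigm concrete, below is a minimal PyTorch sketch (not the authors' released code) of the idea described in the abstract: a frozen base model exposes intermediate activations, linear classifier probes read those activations, and a meta-model maps the probe outputs to the probability that the base model's prediction is correct. `BaseNet`, `MetaModel`, `probe_dims`, and the dummy `loader` are illustrative stand-ins, and the sketch collapses the paper's multi-stage training into a single objective for brevity.

```python
import torch
import torch.nn as nn

class BaseNet(nn.Module):
    """Toy base classifier standing in for the paper's base model (e.g. a CIFAR CNN)."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.layer1 = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 256), nn.ReLU())
        self.layer2 = nn.Sequential(nn.Linear(256, 128), nn.ReLU())
        self.head = nn.Linear(128, num_classes)

    def forward(self, x):
        h1 = self.layer1(x)
        h2 = self.layer2(h1)
        # Return logits plus the intermediate activations the probes will tap.
        return self.head(h2), [h1, h2]

class MetaModel(nn.Module):
    """Linear classifier probes on each tapped layer, pooled into one confidence score."""
    def __init__(self, probe_dims, num_classes=10):
        super().__init__()
        # One linear classifier probe per tapped layer of the base model.
        self.probes = nn.ModuleList(nn.Linear(d, num_classes) for d in probe_dims)
        # Meta-classifier: concatenated probe logits -> P(base model is correct).
        self.meta = nn.Linear(num_classes * len(probe_dims), 1)

    def forward(self, activations):
        feats = torch.cat([p(a) for p, a in zip(self.probes, activations)], dim=1)
        return torch.sigmoid(self.meta(feats)).squeeze(1)

# Dummy batches standing in for CIFAR-10; replace with a real DataLoader.
loader = [(torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,)))
          for _ in range(4)]

base = BaseNet()                        # in practice: a trained, frozen base model
meta = MetaModel(probe_dims=[256, 128])
opt = torch.optim.Adam(meta.parameters(), lr=1e-3)
bce = nn.BCELoss()

for x, y in loader:
    with torch.no_grad():               # the base model is only observed, never updated
        logits, acts = base(x)
        correct = (logits.argmax(dim=1) == y).float()  # did the base model succeed?
    conf = meta(acts)                   # learned confidence score in [0, 1]
    loss = bce(conf, correct)           # meta-model learns to predict success/failure
    opt.zero_grad(); loss.backward(); opt.step()
```

At deployment, the filtering task then amounts to rejecting any sample whose `conf` falls below a chosen threshold, trading coverage for accuracy on the retained samples.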
Cite
Text

Chen et al. "Confidence Scoring Using Whitebox Meta-Models with Linear Classifier Probes." Artificial Intelligence and Statistics, 2019.

Markdown

[Chen et al. "Confidence Scoring Using Whitebox Meta-Models with Linear Classifier Probes." Artificial Intelligence and Statistics, 2019.](https://mlanthology.org/aistats/2019/chen2019aistats-confidence/)

BibTeX
@inproceedings{chen2019aistats-confidence,
  title = {{Confidence Scoring Using Whitebox Meta-Models with Linear Classifier Probes}},
  author = {Chen, Tongfei and Navratil, Jiri and Iyengar, Vijay and Shanmugam, Karthikeyan},
  booktitle = {Artificial Intelligence and Statistics},
  year = {2019},
  pages = {1467--1475},
  volume = {89},
  url = {https://mlanthology.org/aistats/2019/chen2019aistats-confidence/}
}