Confidence Estimation Using Unlabeled Data
Abstract
Overconfidence is a common issue for deep neural networks, limiting their deployment in real-world applications. To better estimate confidence, existing methods mostly focus on fully-supervised scenarios and rely on training labels. In this paper, we propose the first confidence estimation method for a semi-supervised setting, where most training labels are unavailable. We stipulate that even with limited training labels, we can still reasonably approximate the confidence of the model on unlabeled samples by inspecting the consistency of its predictions throughout the training process. We use training consistency as a surrogate function and propose a consistency ranking loss for confidence estimation. On both image classification and segmentation tasks, our method achieves state-of-the-art performance in confidence estimation. Furthermore, we show the benefit of the proposed method through a downstream active learning task.
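The abstract's idea of training consistency as a surrogate for confidence can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes consistency is measured as the fraction of epochs on which a sample's predicted class agrees with its majority prediction, and uses a simple hinge-style pairwise ranking loss that encourages higher confidence scores for more consistently predicted samples. Function names and the `margin` parameter are illustrative choices.

```python
import numpy as np

def training_consistency(pred_history):
    """pred_history: (T, N) array of predicted class labels, one row per epoch.
    Returns, for each of the N samples, the fraction of epochs on which its
    prediction agrees with its majority (most frequent) prediction."""
    T, N = pred_history.shape
    consistency = np.empty(N)
    for j in range(N):
        _, counts = np.unique(pred_history[:, j], return_counts=True)
        consistency[j] = counts.max() / T
    return consistency

def consistency_ranking_loss(conf, consistency, margin=0.05):
    """Pairwise hinge ranking loss (illustrative): for every pair (i, j) where
    sample i is more consistent than sample j, penalize the model unless
    conf[i] exceeds conf[j] by at least `margin`."""
    diff = conf[:, None] - conf[None, :]                 # conf_i - conf_j
    mask = consistency[:, None] > consistency[None, :]   # pairs to rank
    losses = np.maximum(0.0, margin - diff)[mask]
    return losses.mean() if losses.size else 0.0
```

For example, a sample predicted as class 0 in all epochs gets consistency 1.0, and the loss is zero whenever the confidence ordering already matches the consistency ordering with the required margin.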
Cite
Text
Li et al. "Confidence Estimation Using Unlabeled Data." International Conference on Learning Representations, 2023.
Markdown
[Li et al. "Confidence Estimation Using Unlabeled Data." International Conference on Learning Representations, 2023.](https://mlanthology.org/iclr/2023/li2023iclr-confidence/)
BibTeX
@inproceedings{li2023iclr-confidence,
title = {{Confidence Estimation Using Unlabeled Data}},
author = {Li, Chen and Hu, Xiaoling and Chen, Chao},
booktitle = {International Conference on Learning Representations},
year = {2023},
url = {https://mlanthology.org/iclr/2023/li2023iclr-confidence/}
}