Two Sides of Miscalibration: Identifying over and Under-Confidence Prediction for Network Calibration

Abstract

Proper confidence calibration of deep neural networks is essential for reliable predictions in safety-critical tasks. Miscalibration can lead to model over-confidence and/or under-confidence; i.e., the model's confidence in its prediction can be greater or less than the model's accuracy. Recent studies have highlighted the over-confidence issue by introducing calibration techniques and have demonstrated success on various tasks. However, miscalibration through under-confidence has yet to receive much attention. In this paper, we address the necessity of paying attention to the under-confidence issue. We first introduce a novel metric, a miscalibration score, to identify the overall and class-wise calibration status, including whether a model is over- or under-confident. Our proposed metric reveals a pitfall of existing calibration techniques: they often over-calibrate the model and worsen under-confident predictions. We then utilize the class-wise miscalibration score as a proxy to design a calibration technique that can tackle both over- and under-confidence. We report extensive experiments showing that our proposed method substantially outperforms existing calibration techniques. We also validate our proposed calibration technique on an automatic failure detection task with a risk-coverage curve, reporting that our method improves failure detection as well as the trustworthiness of the model. The code is available at \url{https://github.com/AoShuang92/miscalibration_TS}.
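The abstract's central idea, a class-wise score that distinguishes over- from under-confidence, can be illustrated with a signed calibration gap. The sketch below is an assumption for illustration only: the paper's exact miscalibration score is not given here, so we use the simple proxy of mean confidence minus accuracy per predicted class, where a positive gap signals over-confidence and a negative gap signals under-confidence.

```python
# Hypothetical sketch (not the paper's exact metric): a signed, class-wise
# calibration gap. Positive values suggest over-confidence for that class;
# negative values suggest under-confidence.
from collections import defaultdict


def classwise_calibration_gap(confidences, predictions, labels):
    """Return {class: mean confidence - accuracy} per predicted class."""
    conf_sum = defaultdict(float)   # sum of confidences per predicted class
    correct = defaultdict(int)      # number of correct predictions per class
    count = defaultdict(int)        # number of predictions per class
    for c, p, y in zip(confidences, predictions, labels):
        conf_sum[p] += c
        correct[p] += int(p == y)
        count[p] += 1
    # Signed gap keeps the direction of miscalibration, unlike |conf - acc|.
    return {k: conf_sum[k] / count[k] - correct[k] / count[k] for k in count}


# Toy example: class 0 is over-confident (gap +0.4),
# class 1 is under-confident (gap -0.4).
gaps = classwise_calibration_gap(
    confidences=[0.9, 0.9, 0.6, 0.6],
    predictions=[0, 0, 1, 1],
    labels=[0, 1, 1, 1],
)
```

Keeping the sign of the gap, rather than averaging absolute deviations as expected calibration error does, is what lets such a score flag under-confident classes that a scalar calibration method might otherwise push further in the wrong direction.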

Cite

Text

Ao et al. "Two Sides of Miscalibration: Identifying over and Under-Confidence Prediction for Network Calibration." Uncertainty in Artificial Intelligence, 2023.

Markdown

[Ao et al. "Two Sides of Miscalibration: Identifying over and Under-Confidence Prediction for Network Calibration." Uncertainty in Artificial Intelligence, 2023.](https://mlanthology.org/uai/2023/ao2023uai-two/)

BibTeX

@inproceedings{ao2023uai-two,
  title     = {{Two Sides of Miscalibration: Identifying over and Under-Confidence Prediction for Network Calibration}},
  author    = {Ao, Shuang and Rueger, Stefan and Siddharthan, Advaith},
  booktitle = {Uncertainty in Artificial Intelligence},
  year      = {2023},
  pages     = {77--87},
  volume    = {216},
  url       = {https://mlanthology.org/uai/2023/ao2023uai-two/}
}