Improving Predictor Reliability with Selective Recalibration
Abstract
A reliable deep learning system should be able to accurately express its confidence in its predictions, a quality known as calibration. One of the most effective ways to produce reliable confidence estimates with a pre-trained model is to apply a post-hoc recalibration method. Popular recalibration methods like temperature scaling are typically fit on a small amount of data and operate in the model’s output space, as opposed to the more expressive feature embedding space, so they usually have only one or a handful of parameters. However, the target distribution to which they are applied is often complex and difficult to fit well with such a function. To this end, we propose selective recalibration, in which a selection model learns to reject some user-chosen proportion of the data, allowing the recalibrator to focus on regions of the input space that can be well captured by such a model. We provide theoretical analysis to motivate our algorithm, and we test our method through comprehensive experiments on difficult medical imaging and zero-shot classification tasks. Our results show that selective recalibration consistently leads to significantly lower calibration error than a wide range of selection and recalibration baselines.
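As a rough illustration of the pipeline the abstract describes, the NumPy sketch below fits a single temperature on a held-out calibration split, rejects a user-chosen fraction of test points, and reports expected calibration error on the accepted subset. It is not the authors' method: the paper learns the selection model jointly with the recalibrator, whereas this sketch substitutes a simple confidence-threshold selector as a stand-in, and all function names, the grid search, and the rejection rule are illustrative assumptions.

import numpy as np

def softmax(logits, T=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def fit_temperature(cal_logits, cal_labels, grid=np.linspace(0.5, 5.0, 91)):
    """Pick the temperature minimizing negative log-likelihood on the calibration split."""
    best_T, best_nll = 1.0, np.inf
    for T in grid:
        p = softmax(cal_logits, T)
        nll = -np.log(p[np.arange(len(cal_labels)), cal_labels] + 1e-12).mean()
        if nll < best_nll:
            best_T, best_nll = T, nll
    return best_T

def expected_calibration_error(confidences, correct, n_bins=15):
    """Standard ECE: weighted average of |accuracy - confidence| over equal-width bins."""
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            ece += mask.mean() * abs(correct[mask].mean() - confidences[mask].mean())
    return ece

def selective_recalibration_sketch(cal_logits, cal_labels, test_logits, test_labels,
                                   reject_frac=0.2):
    # 1) Post-hoc recalibration: fit one temperature on the calibration split.
    T = fit_temperature(cal_logits, cal_labels)

    # 2) Selection: reject the `reject_frac` least-confident test points
    #    (a confidence-based stand-in for the paper's learned selection model).
    probs = softmax(test_logits, T)
    conf = probs.max(axis=-1)
    keep = conf >= np.quantile(conf, reject_frac)

    # 3) Evaluate calibration on the accepted subset only.
    correct = (probs.argmax(axis=-1) == test_labels).astype(float)
    return expected_calibration_error(conf[keep], correct[keep])

The key design point this mimics is the interaction between the two components: because calibration error is measured only on accepted points, a selector can discard regions the one-parameter recalibrator cannot fit, which is the effect the paper's jointly trained selection model is designed to exploit.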
Cite
@article{zollo2024tmlr-improving,
  title   = {{Improving Predictor Reliability with Selective Recalibration}},
  author  = {Zollo, Thomas P and Deng, Zhun and Snell, Jake and Pitassi, Toniann and Zemel, Richard},
  journal = {Transactions on Machine Learning Research},
  year    = {2024},
  url     = {https://mlanthology.org/tmlr/2024/zollo2024tmlr-improving/}
}