How Flawed Is ECE? An Analysis via Logit Smoothing
Abstract
Informally, a model is calibrated if its predictions are correct with a probability that matches the confidence of the prediction. By far the most common method in the literature for measuring calibration is the expected calibration error (ECE). Recent work, however, has pointed out drawbacks of ECE, such as the fact that it is discontinuous in the space of predictors. In this work, we ask: how fundamental are these issues, and what are their impacts on existing results? Towards this end, we completely characterize the discontinuities of ECE with respect to general probability measures on Polish spaces. We then use the nature of these discontinuities to motivate a novel continuous, easily estimated miscalibration metric, which we term Logit-Smoothed ECE (LS-ECE). By comparing the ECE and LS-ECE of pre-trained image classification models, we show in initial experiments that binned ECE closely tracks LS-ECE, indicating that the theoretical pathologies of ECE may be avoidable in practice.
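To make the abstract's central object concrete, here is a minimal sketch of the standard binned ECE estimator it critiques: predictions are grouped into confidence bins, and the metric is the sample-weighted average gap between each bin's accuracy and mean confidence. The function name, bin count, and binning scheme are illustrative assumptions, not the paper's implementation (and the paper's LS-ECE, which perturbs logits before computing calibration error, is not reproduced here).

```python
import numpy as np

def binned_ece(confidences, correct, n_bins=15):
    """Binned expected calibration error.

    confidences: array of top-class predicted probabilities in (0, 1].
    correct: binary array, 1 where the top-class prediction was right.
    Returns the weighted average over bins of |accuracy - confidence|.
    """
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    n = len(confidences)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        # Half-open bins (lo, hi]; a confidence of exactly 1.0
        # lands in the last bin.
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            acc = correct[mask].mean()       # empirical accuracy in bin
            conf = confidences[mask].mean()  # mean confidence in bin
            ece += (mask.sum() / n) * abs(acc - conf)
    return ece
```

The discontinuity the abstract refers to arises because small changes in a predictor can move points across bin edges, changing the estimate abruptly; LS-ECE avoids this by smoothing the logits before measuring calibration.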
Cite
Text
Chidambaram et al. "How Flawed Is ECE? An Analysis via Logit Smoothing." International Conference on Machine Learning, 2024.
Markdown
[Chidambaram et al. "How Flawed Is ECE? An Analysis via Logit Smoothing." International Conference on Machine Learning, 2024.](https://mlanthology.org/icml/2024/chidambaram2024icml-flawed/)
BibTeX
@inproceedings{chidambaram2024icml-flawed,
title = {{How Flawed Is ECE? An Analysis via Logit Smoothing}},
author = {Chidambaram, Muthu and Lee, Holden and McSwiggen, Colin and Rezchikov, Semon},
booktitle = {International Conference on Machine Learning},
year = {2024},
pages = {8417--8435},
volume = {235},
url = {https://mlanthology.org/icml/2024/chidambaram2024icml-flawed/}
}