Assuming Locally Equal Calibration Errors for Non-Parametric Multiclass Calibration
Abstract
A probabilistic classifier is considered calibrated if it outputs probabilities that equal the expected class distribution given the classifier's output. Calibration is essential in safety-critical tasks, where small deviations between the predicted probabilities and the actually observed class proportions can incur high costs. A common approach to improving the calibration of a classifier is to use a hold-out data set and a post-hoc calibration method to learn a correcting transformation for the classifier's output. This work explores the field of post-hoc calibration methods for multi-class classifiers and formulates two assumptions about the probability simplex which have been used implicitly by many existing non-parametric calibration methods but have never been explicitly stated: assuming locally equal label distributions or assuming locally equal calibration errors. Based on the latter assumption, an intuitive non-parametric post-hoc calibration method is proposed and shown to improve on the state of the art according to the expected calibration error metric on the CIFAR-10 and CIFAR-100 data sets.
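The assumption of locally equal calibration errors suggests a simple non-parametric correction: estimate the local gap between observed labels and predicted probabilities from nearby hold-out predictions and subtract it out. The sketch below illustrates this idea only; it is not the authors' exact algorithm, and the function name, the Euclidean neighbourhood in the simplex, and the renormalization step are illustrative assumptions.

```python
import numpy as np

def lece_style_calibrate(test_probs, holdout_probs, holdout_labels, k=50):
    """Illustrative sketch: correct each test prediction by the average
    (label - prediction) gap of its k nearest hold-out predictions,
    i.e. assume the calibration error is locally constant.
    `holdout_labels` are assumed to be integer class indices."""
    n_classes = holdout_probs.shape[1]
    onehot = np.eye(n_classes)[holdout_labels]          # (n_holdout, n_classes)
    corrected = np.empty_like(test_probs)
    for i, p in enumerate(test_probs):
        # Euclidean distance in the probability simplex (an illustrative choice).
        dists = np.linalg.norm(holdout_probs - p, axis=1)
        nn = np.argsort(dists)[:k]
        # Local calibration error estimate: mean gap between labels and predictions.
        local_error = (onehot[nn] - holdout_probs[nn]).mean(axis=0)
        q = np.clip(p + local_error, 1e-12, None)       # apply correction, keep positive
        corrected[i] = q / q.sum()                      # project back onto the simplex
    return corrected

# Toy usage with synthetic data (shapes only, not a reproduction of the paper's setup):
rng = np.random.default_rng(0)
holdout_probs = rng.dirichlet(np.ones(10), size=500)
holdout_labels = rng.integers(0, 10, size=500)
test_probs = rng.dirichlet(np.ones(10), size=5)
print(lece_style_calibrate(test_probs, holdout_probs, holdout_labels, k=20))
```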
Cite
Text
Valk and Kull. "Assuming Locally Equal Calibration Errors for Non-Parametric Multiclass Calibration." Transactions on Machine Learning Research, 2023.
Markdown
[Valk and Kull. "Assuming Locally Equal Calibration Errors for Non-Parametric Multiclass Calibration." Transactions on Machine Learning Research, 2023.](https://mlanthology.org/tmlr/2023/valk2023tmlr-assuming/)
BibTeX
@article{valk2023tmlr-assuming,
title = {{Assuming Locally Equal Calibration Errors for Non-Parametric Multiclass Calibration}},
author = {Valk, Kaspar and Kull, Meelis},
journal = {Transactions on Machine Learning Research},
year = {2023},
url = {https://mlanthology.org/tmlr/2023/valk2023tmlr-assuming/}
}