Learning Calibrated Belief Functions from Conformal Predictions
Abstract
We consider supervised classification, focusing on the problem of calibrating the classifier's outputs. We show that the p-values provided by Inductive Conformal Prediction (ICP) can be interpreted as a possibility distribution over the set of classes. This allows us to use ICP to compute a predictive belief function which is calibrated by construction. We also propose a learning method which provides p-values in a simpler and faster way, by making use of a multi-output regression model. Results obtained on the CIFAR-10 and Digits data sets show that our approach is comparable to standard ICP in terms of accuracy and calibration, while offering a reduced complexity and avoiding the use of a calibration set.
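To make the abstract concrete, the following is a minimal sketch of how standard ICP p-values can be computed for a multi-class problem and read as a possibility distribution over the classes. This is an illustrative reconstruction, not the paper's exact pipeline: the nonconformity score (one minus the predicted probability of the candidate class), the use of `LogisticRegression`, and the data splits are all assumptions made for the example.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Split the data into a proper training set, a calibration set, and a test set.
# (Standard ICP requires a held-out calibration set; the paper's proposed
# regression-based method avoids it.)
X, y = load_digits(return_X_y=True)
X_train, X_rest, y_train, y_rest = train_test_split(
    X, y, test_size=0.4, random_state=0)
X_cal, X_test, y_cal, y_test = train_test_split(
    X_rest, y_rest, test_size=0.5, random_state=0)

# Underlying classifier (an assumption for this sketch).
clf = LogisticRegression(max_iter=2000).fit(X_train, y_train)

# Nonconformity score: one minus the probability assigned to the true class.
cal_scores = 1.0 - clf.predict_proba(X_cal)[np.arange(len(y_cal)), y_cal]


def icp_pvalues(x):
    """ICP p-value for each candidate class of a single test input.

    The resulting vector can be interpreted as a possibility
    distribution over the classes (its maximum is close to 1).
    """
    probs = clf.predict_proba(x.reshape(1, -1))[0]
    test_scores = 1.0 - probs  # nonconformity of each candidate label
    return np.array([
        (np.sum(cal_scores >= s) + 1) / (len(cal_scores) + 1)
        for s in test_scores
    ])


pv = icp_pvalues(X_test[0])
```

The multi-output regression model proposed in the paper would be trained to output such p-value vectors directly, bypassing the calibration-set lookup above.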
Cite
Text
Martin Bordini et al. "Learning Calibrated Belief Functions from Conformal Predictions." Proceedings of the Thirteenth International Symposium on Imprecise Probability: Theories and Applications, 2023.
Markdown
[Martin Bordini et al. "Learning Calibrated Belief Functions from Conformal Predictions." Proceedings of the Thirteenth International Symposium on Imprecise Probability: Theories and Applications, 2023.](https://mlanthology.org/isipta/2023/martinbordini2023isipta-learning/)
BibTeX
@inproceedings{martinbordini2023isipta-learning,
title = {{Learning Calibrated Belief Functions from Conformal Predictions}},
author = {Martin Bordini, Vitor and Destercke, Sébastien and Quost, Benjamin},
booktitle = {Proceedings of the Thirteenth International Symposium on Imprecise Probability: Theories and Applications},
year = {2023},
pages = {311-320},
volume = {215},
url = {https://mlanthology.org/isipta/2023/martinbordini2023isipta-learning/}
}