Robustness Quantification: A New Method for Assessing the Reliability of the Predictions of a Classifier
Abstract
Based on existing ideas from the field of imprecise probabilities, we present a new approach for assessing the reliability of the individual predictions of a generative probabilistic classifier. We call this approach robustness quantification, compare it to uncertainty quantification, and demonstrate that it continues to work well even for classifiers learned from small training sets sampled from a shifted distribution.
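To give a concrete flavour of what a per-prediction robustness score could look like, the sketch below scores each prediction of a generative classifier by how much contamination its predictive distribution can absorb before the predicted class changes. The ε-contamination model, the Gaussian naive Bayes classifier, and the cut-off value are our own illustrative assumptions, not the construction proposed in the paper.

```python
# Minimal sketch (not the authors' method): score each prediction by the largest
# epsilon-contamination of its predictive distribution that cannot flip the
# predicted class.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=500, n_features=10, n_classes=3,
                           n_informative=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

clf = GaussianNB().fit(X_train, y_train)   # a generative probabilistic classifier
proba = clf.predict_proba(X_test)          # predictive distributions p(y | x)

# Margin between the two largest class probabilities for each test point.
sorted_proba = np.sort(proba, axis=1)
margin = sorted_proba[:, -1] - sorted_proba[:, -2]

# A prediction survives every contamination (1 - eps) * p + eps * q, with q an
# arbitrary distribution, as long as (1 - eps) * margin >= eps, i.e.
# eps <= margin / (1 + margin); we take that bound as the robustness score.
robustness = margin / (1.0 + margin)

# Predictions with a low score are flagged as potentially unreliable.
threshold = 0.1                            # hypothetical cut-off
flagged = robustness < threshold
print(f"flagged {flagged.sum()} of {len(flagged)} predictions as non-robust")
```

Under this toy score, a prediction backed by a near-uniform predictive distribution is flagged even if it happens to be correct, which mirrors the intended use of a reliability measure: deciding which individual predictions to trust rather than measuring aggregate accuracy.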
Cite
Text
Detavernier and De Bock. "Robustness Quantification: A New Method for Assessing the Reliability of the Predictions of a Classifier." Proceedings of the Fourteenth International Symposium on Imprecise Probabilities: Theories and Applications, 2025.

Markdown

[Detavernier and De Bock. "Robustness Quantification: A New Method for Assessing the Reliability of the Predictions of a Classifier." Proceedings of the Fourteenth International Symposium on Imprecise Probabilities: Theories and Applications, 2025.](https://mlanthology.org/isipta/2025/detavernier2025isipta-robustness/)

BibTeX
@inproceedings{detavernier2025isipta-robustness,
title = {{Robustness Quantification: A New Method for Assessing the Reliability of the Predictions of a Classifier}},
author = {Detavernier, Adrián and De Bock, Jasper},
booktitle = {Proceedings of the Fourteenth International Symposium on Imprecise Probabilities: Theories and Applications},
year = {2025},
pages = {126--136},
volume = {290},
url = {https://mlanthology.org/isipta/2025/detavernier2025isipta-robustness/}
}