Disparate Conditional Prediction in Multiclass Classifiers
Abstract
We propose methods for auditing multiclass classifiers for fairness under multiclass equalized odds, by estimating the deviation from equalized odds when the classifier is not completely fair. We generalize to multiclass classifiers the measure of Disparate Conditional Prediction (DCP), originally suggested by Sabato & Yom-Tov (2020) for binary classifiers. DCP is defined as the fraction of the population for which the classifier predicts with conditional prediction probabilities that differ from the closest common baseline. We provide new local-optimization methods for estimating the multiclass DCP under two different regimes, one in which the conditional confusion matrices for each protected sub-population are known, and one in which these cannot be estimated, for instance, because the classifier is inaccessible or because good-quality individual-level data is not available. These methods can be used to detect classifiers that likely treat a significant fraction of the population unfairly. Experiments demonstrate the accuracy of the methods. The code for the experiments is provided as supplementary material.
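For intuition only, below is a minimal sketch of the DCP quantity described above, under simplifying assumptions that are not part of the paper: a single conditioning event, a fixed tolerance `eps` for "differing" from the baseline, and candidate baselines restricted to the groups' own conditional prediction vectors. It is a brute-force illustration, not the local-optimization estimators proposed in the paper; all names (`dcp_brute_force`, `cond_pred`, `weights`, `eps`) are hypothetical.

```python
# Illustrative sketch only: a brute-force approximation of a multiclass
# DCP-style quantity. NOT the paper's estimators; the tolerance `eps` and
# the restriction of baselines to the groups' own rows are assumptions
# made for this example.
import numpy as np

def dcp_brute_force(cond_pred, weights, eps=0.05):
    """Approximate a DCP-style value for one conditioning event.

    cond_pred: array of shape (G, K); row g holds the conditional prediction
               probabilities over K classes for protected sub-population g.
    weights:   array of shape (G,); the population fraction of each group.
    eps:       tolerance (total-variation distance) for treating a group's
               conditional prediction probabilities as matching the baseline.

    Returns the smallest population fraction that deviates from a common
    baseline, with the baseline restricted to the groups' own rows.
    """
    cond_pred = np.asarray(cond_pred, dtype=float)
    weights = np.asarray(weights, dtype=float)
    best = 1.0
    for baseline in cond_pred:  # candidate common baselines
        # total-variation distance of each group's row from the baseline
        tv = 0.5 * np.abs(cond_pred - baseline).sum(axis=1)
        deviating = weights[tv > eps].sum()  # population mass that deviates
        best = min(best, deviating)
    return best

# Toy usage: three groups, three classes; the second group is predicted
# with markedly different conditional probabilities.
cond_pred = [[0.7, 0.2, 0.1],
             [0.2, 0.6, 0.2],
             [0.7, 0.2, 0.1]]
weights = [0.5, 0.2, 0.3]
print(dcp_brute_force(cond_pred, weights))  # -> 0.2
```

Restricting candidate baselines to the groups' own rows can only give an upper bound on the minimized deviating fraction; the paper's local-optimization methods search for the closest common baseline directly and also cover the regime in which the conditional confusion matrices cannot be estimated.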
Cite
Text
Sabato et al. "Disparate Conditional Prediction in Multiclass Classifiers." Proceedings of the 42nd International Conference on Machine Learning, 2025.
Markdown
[Sabato et al. "Disparate Conditional Prediction in Multiclass Classifiers." Proceedings of the 42nd International Conference on Machine Learning, 2025.](https://mlanthology.org/icml/2025/sabato2025icml-disparate/)
BibTeX
@inproceedings{sabato2025icml-disparate,
title = {{Disparate Conditional Prediction in Multiclass Classifiers}},
author = {Sabato, Sivan and Treister, Eran and Yom-Tov, Elad},
booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
year = {2025},
pages = {52508--52525},
volume = {267},
url = {https://mlanthology.org/icml/2025/sabato2025icml-disparate/}
}