Standardized Interpretable Fairness Measures for Continuous Risk Scores

Abstract

We propose a standardized version of fairness measures for continuous scores with a reasonable interpretation based on the Wasserstein distance. Our measures are easily computable and well suited for quantifying and interpreting the strength of group disparities as well as for comparing biases across different models, datasets, or time points. We derive a link between the different families of existing fairness measures for scores and show that the proposed standardized fairness measures outperform ROC-based fairness measures because they are more explicit and can quantify significant biases that ROC-based fairness measures miss.
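The abstract's claim that the measures are easily computable can be illustrated with a minimal sketch. This is a hypothetical construction, not the paper's exact definition: it takes the Wasserstein-1 distance between the score distributions of two groups (via `scipy.stats.wasserstein_distance`) and standardizes it by the length of the score range, so the result is interpretable on a [0, 1] scale.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def standardized_wasserstein_fairness(scores_a, scores_b,
                                      score_min=0.0, score_max=1.0):
    """Illustrative sketch (not the paper's exact formula): Wasserstein-1
    distance between two groups' continuous risk scores, divided by the
    score range so the measure lies in [0, 1]."""
    d = wasserstein_distance(scores_a, scores_b)
    return d / (score_max - score_min)

# Synthetic risk scores for two demographic groups (hypothetical data).
rng = np.random.default_rng(0)
group_a = rng.beta(2, 5, size=1000)
group_b = rng.beta(3, 4, size=1000)

print(standardized_wasserstein_fairness(group_a, group_b))
```

A value of 0 would indicate identical score distributions across groups; larger values quantify the strength of the disparity on the common [0, 1] scale, which is what makes comparisons across models or datasets meaningful.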

Cite

Text

Becker et al. "Standardized Interpretable Fairness Measures for Continuous Risk Scores." International Conference on Machine Learning, 2024.

Markdown

[Becker et al. "Standardized Interpretable Fairness Measures for Continuous Risk Scores." International Conference on Machine Learning, 2024.](https://mlanthology.org/icml/2024/becker2024icml-standardized/)

BibTeX

@inproceedings{becker2024icml-standardized,
  title     = {{Standardized Interpretable Fairness Measures for Continuous Risk Scores}},
  author    = {Becker, Ann-Kristin and Dumitrasc, Oana and Broelemann, Klaus},
  booktitle = {International Conference on Machine Learning},
  year      = {2024},
  pages     = {3327--3346},
  volume    = {235},
  url       = {https://mlanthology.org/icml/2024/becker2024icml-standardized/}
}