Fairness with Overlapping Groups; a Probabilistic Perspective
Abstract
In algorithmically fair prediction problems, a standard goal is to ensure the equality of fairness metrics across multiple overlapping groups simultaneously. We reconsider this standard fair classification problem using a probabilistic population analysis, which, in turn, reveals the Bayes-optimal classifier. Our approach unifies a variety of existing group-fair classification methods and enables extensions to a wide range of non-decomposable multiclass performance metrics and fairness measures. The Bayes-optimal classifier further inspires consistent procedures for algorithmically fair classification with overlapping groups. On a variety of real datasets, the proposed approach outperforms baselines in terms of its fairness-performance tradeoff.
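The abstract above refers to a plug-in style setup: each individual may belong to several overlapping protected groups, a probabilistic classifier thresholds an estimate of P(Y=1 | x), and fairness is audited as the spread of a group-conditional metric across all groups simultaneously. The sketch below is only an illustration of that generic setting, not the authors' algorithm; the synthetic data, the demographic-parity audit, and names such as `group_masks` are assumptions introduced for the example.

```python
# Illustrative sketch only -- NOT the paper's method. It shows overlapping
# groups, a plug-in (Bayes-style) classifier that thresholds an estimate of
# P(Y=1 | x), and a fairness audit measured across all groups at once.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic data: 2 features, binary label, two group attributes
# (e.g., gender and age bracket). Groups defined by different attributes
# overlap rather than partition the population.
n = 5000
X = rng.normal(size=(n, 2))
y = (X[:, 0] + 0.5 * rng.normal(size=n) > 0).astype(int)
gender = rng.integers(0, 2, size=n)   # group attribute 1
age = rng.integers(0, 2, size=n)      # group attribute 2

group_masks = {
    "gender=0": gender == 0,
    "gender=1": gender == 1,
    "age=0": age == 0,
    "age=1": age == 1,
}

# Plug-in classifier: estimate eta(x) = P(Y=1 | x), then threshold at 1/2.
eta_hat = LogisticRegression().fit(X, y).predict_proba(X)[:, 1]
y_pred = (eta_hat >= 0.5).astype(int)

# Group-fairness audit: demographic parity asks that P(Yhat=1 | group) be
# (approximately) equal for every overlapping group simultaneously.
rates = {name: y_pred[mask].mean() for name, mask in group_masks.items()}
violation = max(rates.values()) - min(rates.values())

print("positive rate per group:", {k: round(v, 3) for k, v in rates.items()})
print("demographic parity violation:", round(violation, 3))
```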
Cite
Text
Yang et al. "Fairness with Overlapping Groups; a Probabilistic Perspective." Neural Information Processing Systems, 2020.
Markdown
[Yang et al. "Fairness with Overlapping Groups; a Probabilistic Perspective." Neural Information Processing Systems, 2020.](https://mlanthology.org/neurips/2020/yang2020neurips-fairness/)
BibTeX
@inproceedings{yang2020neurips-fairness,
  title = {{Fairness with Overlapping Groups; a Probabilistic Perspective}},
  author = {Yang, Forest and Cisse, Mouhamadou and Koyejo, Sanmi},
  booktitle = {Neural Information Processing Systems},
  year = {2020},
  url = {https://mlanthology.org/neurips/2020/yang2020neurips-fairness/}
}