Individual Arbitrariness and Group Fairness

Abstract

Machine learning tasks may admit multiple competing models that achieve similar performance yet produce conflicting outputs for individual samples, a phenomenon known as predictive multiplicity. We demonstrate that fairness interventions in machine learning, when optimized solely for group fairness and accuracy, can exacerbate predictive multiplicity. Consequently, state-of-the-art fairness interventions can mask high predictive multiplicity behind favorable group fairness and accuracy metrics. We argue that a third axis of "arbitrariness" should be considered when deploying models to aid decision-making in applications with individual-level impact. To address this challenge, we propose an ensemble algorithm, applicable to any fairness intervention, that provably ensures more consistent predictions.
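
The sketch below is a minimal illustration (not the paper's exact procedure) of the two quantities the abstract refers to: it approximates a set of comparably accurate competing models by retraining on bootstrap resamples, measures predictive multiplicity as the fraction of test samples on which those models disagree, and shows how a simple score-averaging ensemble yields a single, more consistent predictor. The dataset, learner, and averaging rule are illustrative assumptions.

```python
# Minimal sketch (illustrative, not the paper's method): quantify predictive
# multiplicity as per-sample disagreement among comparably accurate models,
# then reduce it by averaging their scores into one ensemble predictor.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

# Competing models: same learner fit on different bootstrap resamples,
# a crude proxy for the set of near-optimal ("Rashomon") models.
rng = np.random.default_rng(0)
models = []
for _ in range(20):
    idx = rng.integers(0, len(X_tr), size=len(X_tr))
    models.append(LogisticRegression(max_iter=1000).fit(X_tr[idx], y_tr[idx]))

preds = np.array([m.predict(X_te) for m in models])  # shape: (n_models, n_samples)
accs = (preds == y_te).mean(axis=1)

# Ambiguity: fraction of test points where at least two of the
# comparably accurate models assign different labels.
ambiguity = (preds.min(axis=0) != preds.max(axis=0)).mean()
print(f"accuracy range: {accs.min():.3f}-{accs.max():.3f}, ambiguity: {ambiguity:.3f}")

# Score-averaging ensemble: a single predictor whose outputs no longer
# depend on which individual competing model happened to be deployed.
avg_score = np.mean([m.predict_proba(X_te)[:, 1] for m in models], axis=0)
ens_pred = (avg_score >= 0.5).astype(int)
print(f"ensemble accuracy: {(ens_pred == y_te).mean():.3f}")
```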

Cite

Text

Long et al. "Individual Arbitrariness and Group Fairness." Neural Information Processing Systems, 2023.

Markdown

[Long et al. "Individual Arbitrariness and Group Fairness." Neural Information Processing Systems, 2023.](https://mlanthology.org/neurips/2023/long2023neurips-individual/)

BibTeX

@inproceedings{long2023neurips-individual,
  title     = {{Individual Arbitrariness and Group Fairness}},
  author    = {Long, Carol and Hsu, Hsiang and Alghamdi, Wael and Calmon, Flavio},
  booktitle = {Neural Information Processing Systems},
  year      = {2023},
  url       = {https://mlanthology.org/neurips/2023/long2023neurips-individual/}
}