Popularizing Fairness: Group Fairness and Individual Welfare

Abstract

Group-fair learning methods typically seek to ensure that some measure of prediction efficacy for (often historically) disadvantaged minority groups is comparable to that for the majority of the population. When a principal seeks to replace a conventional model with a group-fair one, the principal may face opposition from those who feel they would be harmed by the switch, which in turn may deter adoption. We propose that a potential mitigation to this concern is to ensure that the group-fair model is also popular, in the sense that, for a majority of the target population, it yields a preferred distribution over outcomes compared with the conventional model. In this paper, we show that state-of-the-art fair learning approaches are often unpopular in this sense. We propose several efficient algorithms for postprocessing an existing group-fair learning scheme to improve its popularity while retaining fairness. Through extensive experiments, we demonstrate that the proposed postprocessing approaches are highly effective in practice.
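As a rough illustration of the popularity criterion described in the abstract (not the authors' algorithm), the sketch below checks whether a candidate group-fair model is "popular" relative to a baseline by counting the fraction of individuals who weakly prefer their outcome under the fair model. It makes a simplifying assumption that each individual prefers a higher probability of a single favorable outcome, which is a stand-in for the paper's more general preference over outcome distributions; the function name and data are hypothetical.

import numpy as np

def popularity(p_base: np.ndarray, p_fair: np.ndarray) -> float:
    """Fraction of individuals who weakly prefer the fair model.

    p_base, p_fair: shape (n,) arrays giving each individual's
    probability of receiving the favorable outcome under the baseline
    model and the group-fair model, respectively. We assume (a
    simplification of the paper's setting) that an individual prefers
    whichever model gives a higher chance of the favorable outcome.
    """
    return float(np.mean(p_fair >= p_base))

# Hypothetical example: the fair model shifts individual outcome
# probabilities up or down; it is "popular" if a majority of the
# population weakly prefers it to the baseline.
rng = np.random.default_rng(0)
p_base = rng.uniform(size=1000)
p_fair = np.clip(p_base + rng.normal(0.02, 0.1, size=1000), 0.0, 1.0)

score = popularity(p_base, p_fair)
print(f"popularity: {score:.2%}")
print("popular" if score > 0.5 else "unpopular")

Under this simplified preference model, a postprocessing step in the spirit of the paper would adjust p_fair to raise this fraction above one half while keeping the chosen group-fairness measure intact.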

Cite

Text

Estornell et al. "Popularizing Fairness: Group Fairness and Individual Welfare." AAAI Conference on Artificial Intelligence, 2023. doi:10.1609/AAAI.V37I6.25910

Markdown

[Estornell et al. "Popularizing Fairness: Group Fairness and Individual Welfare." AAAI Conference on Artificial Intelligence, 2023.](https://mlanthology.org/aaai/2023/estornell2023aaai-popularizing/) doi:10.1609/AAAI.V37I6.25910

BibTeX

@inproceedings{estornell2023aaai-popularizing,
  title     = {{Popularizing Fairness: Group Fairness and Individual Welfare}},
  author    = {Estornell, Andrew and Das, Sanmay and Juba, Brendan and Vorobeychik, Yevgeniy},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2023},
  pages     = {7485--7493},
  doi       = {10.1609/AAAI.V37I6.25910},
  url       = {https://mlanthology.org/aaai/2023/estornell2023aaai-popularizing/}
}