When Do Minimax-Fair Learning and Empirical Risk Minimization Coincide?

Abstract

Minimax-fair machine learning minimizes the error for the worst-off group. However, empirical evidence suggests that when sophisticated models are trained with standard empirical risk minimization (ERM), they often perform as well on the worst-off group as a minimax-trained model. Our work makes this counter-intuitive observation concrete. We prove that if the hypothesis class is sufficiently expressive and the group information is recoverable from the features, the ERM and minimax-fair learning formulations indeed have the same performance on the worst-off group. We provide additional empirical evidence showing that this observation holds on a wide range of datasets and hypothesis classes. Since ERM is fundamentally easier than minimax optimization, our findings have implications for the practice of fair machine learning.
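In symbols, ERM solves min_h R(h) over the pooled data, while minimax-fair learning solves min_h max_g R_g(h), where R_g(h) is the risk of hypothesis h on group g; the paper shows that, under the stated conditions, the two attain the same worst-group performance. The sketch below is our own minimal construction (not the authors' experiments): on synthetic data where group membership is part of the feature vector, an expressive model fit by plain ERM is compared against a simple minimax-style heuristic that reweights groups via exponentiated-gradient updates. The update rule and all parameter choices here are illustrative assumptions.

# A minimal synthetic illustration of the claim (our construction, not the
# authors' experiments): with an expressive model class and group membership
# recoverable from the features, plain ERM tends to match a simple minimax
# (worst-group) reweighting scheme on worst-group error.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

# Two groups with different feature distributions; the group id is appended
# to the features, so it is trivially recoverable.
n = 4000
g = rng.integers(0, 2, size=n)                       # group label
x = rng.normal(size=(n, 2)) + g[:, None] * 2.0       # group shifts the features
X = np.column_stack([x, g])                          # group recoverable from X
y = ((x[:, 0] + 0.5 * g) > rng.normal(size=n)).astype(int)

def group_errors(clf, X, y, g):
    pred = clf.predict(X)
    return [float(np.mean(pred[g == k] != y[g == k])) for k in (0, 1)]

# 1) Plain ERM with an expressive model class.
erm = GradientBoostingClassifier(random_state=0).fit(X, y)

# 2) A basic minimax heuristic: exponentiated-gradient updates on group
#    weights (in the spirit of minimax group fairness; the specific update
#    and step size are our assumptions).
w = np.array([0.5, 0.5])
counts = np.bincount(g)
for _ in range(10):
    sample_w = w[g] / counts[g]                      # per-sample weights
    mm = GradientBoostingClassifier(random_state=0).fit(X, y, sample_weight=sample_w)
    errs = np.array(group_errors(mm, X, y, g))
    w = w * np.exp(1.0 * errs)                       # upweight the worse-off group
    w /= w.sum()

print("ERM per-group errors:    ", group_errors(erm, X, y, g))
print("minimax per-group errors:", group_errors(mm, X, y, g))

On runs of this toy setup, the worst-group error of the ERM model is close to that of the reweighted model, consistent with the paper's thesis; with a restricted hypothesis class (e.g., a linear model) the gap would generally reopen.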

Cite

Text

Singh et al. "When Do Minimax-Fair Learning and Empirical Risk Minimization Coincide?" International Conference on Machine Learning, 2023.

Markdown

[Singh et al. "When Do Minimax-Fair Learning and Empirical Risk Minimization Coincide?" International Conference on Machine Learning, 2023.](https://mlanthology.org/icml/2023/singh2023icml-minimaxfair/)

BibTeX

@inproceedings{singh2023icml-minimaxfair,
  title     = {{When Do Minimax-Fair Learning and Empirical Risk Minimization Coincide?}},
  author    = {Singh, Harvineet and Kleindessner, Matthäus and Cevher, Volkan and Chunara, Rumi and Russell, Chris},
  booktitle = {International Conference on Machine Learning},
  year      = {2023},
  pages     = {31969--31989},
  volume    = {202},
  url       = {https://mlanthology.org/icml/2023/singh2023icml-minimaxfair/}
}