Superhuman Fairness

Abstract

The fairness of machine learning-based decisions has become an increasingly important focus in the design of supervised machine learning methods. Most fairness approaches optimize a specified trade-off between performance measure(s) (e.g., accuracy, log loss, or AUC) and fairness metric(s) (e.g., demographic parity, equalized odds). This raises the question: are the right performance-fairness trade-offs being specified? We instead re-cast fair machine learning as an imitation learning task by introducing superhuman fairness, which seeks to simultaneously outperform human decisions on multiple predictive performance and fairness measures. We demonstrate the benefits of this approach when the available human decisions are suboptimal.
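The two fairness metrics named in the abstract can be made concrete with a minimal sketch, assuming binary predictions and a binary protected-group attribute; the function names and array encoding are illustrative, not from the paper:

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-prediction rates between the two groups.

    Demographic parity asks that P(Y_hat = 1) be equal across groups.
    """
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rate_0 = y_pred[group == 0].mean()
    rate_1 = y_pred[group == 1].mean()
    return abs(rate_0 - rate_1)

def equalized_odds_gap(y_true, y_pred, group):
    """Largest group disparity in true-positive or false-positive rate.

    Equalized odds asks that P(Y_hat = 1 | Y = y) be equal across groups
    for both y = 0 (FPR) and y = 1 (TPR).
    """
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    gaps = []
    for label in (0, 1):  # label 0 compares FPRs, label 1 compares TPRs
        mask = y_true == label
        r_0 = y_pred[mask & (group == 0)].mean()
        r_1 = y_pred[mask & (group == 1)].mean()
        gaps.append(abs(r_0 - r_1))
    return max(gaps)
```

A trade-off-based method would fold one of these gaps into its objective with a chosen weight; the paper's superhuman-fairness framing instead asks a predictor to beat the human decisions on several such measures at once, sidestepping the need to pick that weight.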

Cite

Text

Memarrast et al. "Superhuman Fairness." International Conference on Machine Learning, 2023.

Markdown

[Memarrast et al. "Superhuman Fairness." International Conference on Machine Learning, 2023.](https://mlanthology.org/icml/2023/memarrast2023icml-superhuman/)

BibTeX

@inproceedings{memarrast2023icml-superhuman,
  title     = {{Superhuman Fairness}},
  author    = {Memarrast, Omid and Vu, Linh and Ziebart, Brian D.},
  booktitle = {International Conference on Machine Learning},
  year      = {2023},
  pages     = {24420--24435},
  volume    = {202},
  url       = {https://mlanthology.org/icml/2023/memarrast2023icml-superhuman/}
}