Probably Approximately Metric-Fair Learning

Abstract

The seminal work of Dwork et al. [ITCS 2012] introduced a metric-based notion of individual fairness: given a task-specific similarity metric, their notion required that every pair of similar individuals should be treated similarly. In the context of machine learning, however, individual fairness does not generalize from a training set to the underlying population. We show that this can lead to computational intractability even for simple fair-learning tasks. With this motivation in mind, we introduce and study a relaxed notion of approximate metric-fairness: for a random pair of individuals sampled from the population, with all but a small probability of error, if they are similar then they should be treated similarly. We formalize the goal of achieving approximate metric-fairness simultaneously with best-possible accuracy as Probably Approximately Correct and Fair (PACF) Learning. We show that approximate metric-fairness does generalize, and leverage these generalization guarantees to construct polynomial-time PACF learning algorithms for the classes of linear and logistic predictors.
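The approximate metric-fairness condition described in the abstract can be illustrated empirically: sample random pairs from the population and check how often similar individuals are treated dissimilarly. The sketch below is illustrative only; the predictor `h`, metric `d`, slack parameter `gamma`, and population are hypothetical placeholders, not the paper's constructions.

```python
import random

def metric_fairness_violation_rate(h, d, population, gamma=0.1,
                                   n_pairs=10000, seed=0):
    """Estimate the probability that a random pair (x, y) violates
    approximate metric-fairness, i.e. |h(x) - h(y)| > d(x, y) + gamma.
    Illustrative sketch; h, d, gamma, and population are assumptions."""
    rng = random.Random(seed)
    violations = 0
    for _ in range(n_pairs):
        x, y = rng.choice(population), rng.choice(population)
        if abs(h(x) - h(y)) > d(x, y) + gamma:
            violations += 1
    return violations / n_pairs

# Toy example: the identity predictor h(x) = x under the metric
# d(x, y) = |x - y| never treats similar individuals dissimilarly,
# so the estimated violation rate is 0.
pop = [i / 100 for i in range(101)]
rate = metric_fairness_violation_rate(lambda x: x,
                                      lambda x, y: abs(x - y), pop)
```

A small violation rate corresponds to the "all but a small probability of error" in the relaxed notion; exact metric-fairness would require the rate to be zero over the whole population.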

Cite

Text

Yona and Rothblum. "Probably Approximately Metric-Fair Learning." International Conference on Machine Learning, 2018.

Markdown

[Yona and Rothblum. "Probably Approximately Metric-Fair Learning." International Conference on Machine Learning, 2018.](https://mlanthology.org/icml/2018/yona2018icml-probably/)

BibTeX

@inproceedings{yona2018icml-probably,
  title     = {{Probably Approximately Metric-Fair Learning}},
  author    = {Yona, Gal and Rothblum, Guy},
  booktitle = {International Conference on Machine Learning},
  year      = {2018},
  pages     = {5680--5688},
  volume    = {80},
  url       = {https://mlanthology.org/icml/2018/yona2018icml-probably/}
}