Individual Fairness Revisited: Transferring Techniques from Adversarial Robustness

Abstract

We turn the definition of individual fairness on its head: rather than ascertaining the fairness of a model given a predetermined metric, we find a metric for a given model that satisfies individual fairness. This facilitates discussion of a model's fairness, addressing the issue that it may be difficult to specify a suitable metric a priori. Our contributions are twofold: First, we introduce the definition of a minimal metric and characterize the behavior of models in terms of minimal metrics. Second, for more complicated models, we apply the mechanism of randomized smoothing from adversarial robustness to make them individually fair under a given weighted Lp metric. Our experiments show that adapting the minimal metrics of linear models to more complicated neural networks can lead to meaningful and interpretable fairness guarantees at little cost to utility.
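The randomized-smoothing mechanism the abstract mentions can be sketched in its simplest (Gaussian, L2-style) form: classify many noise-perturbed copies of an input and return the majority vote, so that nearby inputs receive similar predictions. This is only an illustrative sketch; `smoothed_predict`, the toy base classifier, and all parameter values below are assumptions for exposition, not the paper's implementation, which handles general weighted Lp metrics.

```python
import numpy as np

def smoothed_predict(model, x, sigma=0.5, n_samples=1000, rng=None):
    """Majority-vote prediction of a randomized-smoothed classifier.

    Assumes `model(z)` returns an integer class label for a 1-D
    feature vector; sigma and n_samples are illustrative choices.
    """
    rng = np.random.default_rng(rng)
    votes = {}
    for _ in range(n_samples):
        # Perturb the input with isotropic Gaussian noise and tally
        # the base model's vote on the noisy copy.
        noisy = x + rng.normal(scale=sigma, size=x.shape)
        label = model(noisy)
        votes[label] = votes.get(label, 0) + 1
    return max(votes, key=votes.get)

# Toy base classifier: threshold on the first feature.
base = lambda z: int(z[0] > 0.0)
x = np.array([2.0, -1.0])
print(smoothed_predict(base, x, sigma=0.5, n_samples=500, rng=0))
```

Because the smoothed prediction depends on the input only through the noise distribution centered at it, small perturbations of the input change the vote distribution smoothly, which is the property the paper exploits to certify individual fairness.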

Cite

Text

Yeom and Fredrikson. "Individual Fairness Revisited: Transferring Techniques from Adversarial Robustness." International Joint Conference on Artificial Intelligence, 2020. doi:10.24963/IJCAI.2020/61

Markdown

[Yeom and Fredrikson. "Individual Fairness Revisited: Transferring Techniques from Adversarial Robustness." International Joint Conference on Artificial Intelligence, 2020.](https://mlanthology.org/ijcai/2020/yeom2020ijcai-individual/) doi:10.24963/IJCAI.2020/61

BibTeX

@inproceedings{yeom2020ijcai-individual,
  title     = {{Individual Fairness Revisited: Transferring Techniques from Adversarial Robustness}},
  author    = {Yeom, Samuel and Fredrikson, Matt},
  booktitle = {International Joint Conference on Artificial Intelligence},
  year      = {2020},
  pages     = {437--443},
  doi       = {10.24963/IJCAI.2020/61},
  url       = {https://mlanthology.org/ijcai/2020/yeom2020ijcai-individual/}
}