SoFaiR: Single Shot Fair Representation Learning

Abstract

To avoid discriminatory uses of their data, organizations can map the data into a representation that filters out information related to sensitive attributes. However, all existing methods in fair representation learning involve a fairness-information trade-off: to achieve different points on the fairness-information plane, one must train different models. In this paper, we first demonstrate that fairness-information trade-offs are fully characterized by rate-distortion trade-offs. Then, we use this key result to propose SoFaiR, a single-shot fair representation learning method that generates, with one trained model, many points on the fairness-information plane. Beyond its computational savings, our single-shot approach is, to the best of our knowledge, the first fair representation learning method that explains what information is affected by changes in the fairness/distortion properties of the representation. Empirically, we find on three datasets that SoFaiR achieves fairness-information trade-offs similar to those of its multi-shot counterparts.
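The core idea of the abstract, that a single parametric family of encoders can sweep the whole fairness-information plane instead of retraining one model per trade-off point, can be illustrated with a toy sketch. Everything below is a hypothetical illustration, not the paper's actual method: the encoder is a simple linear filter with a single knob `lam` (λ = 0 keeps all information, λ = 1 removes the linearly s-predictable component), and "leakage" and "information" are crude proxies (maximum correlation with the sensitive attribute, and representation variance).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
s = rng.integers(0, 2, n).astype(float)              # sensitive attribute
x = np.column_stack([s + 0.5 * rng.normal(size=n),   # s-correlated feature
                     rng.normal(size=n)])            # s-independent feature

# Direction of x explained by s (least-squares regression of x on centered s).
s_c = s - s.mean()
w = (s_c @ x) / (s_c @ s_c)                          # per-feature OLS coefficients

def encode(x, s, lam):
    """Hypothetical one-parameter encoder: lam=0 keeps all information,
    lam=1 subtracts the component of x linearly predictable from s."""
    return x - lam * np.outer(s - s.mean(), w)

def leakage(z, s):
    """Proxy for unfairness: max absolute correlation between any
    representation dimension and the sensitive attribute."""
    zc = z - z.mean(axis=0)
    sc = s - s.mean()
    corr = (sc @ zc) / (np.linalg.norm(sc) * np.linalg.norm(zc, axis=0))
    return np.abs(corr).max()

# One "trained" family sweeps the fairness-information plane at test time.
for lam in (0.0, 0.5, 1.0):
    z = encode(x, s, lam)
    print(f"lambda={lam:.1f}  leakage={leakage(z, s):.3f}  info={z.var():.3f}")
```

As λ increases, leakage falls (toward zero at λ = 1, where the residual is orthogonal to s by construction) and the representation's variance shrinks, mimicking the fairness-information trade-off traversed by one model. SoFaiR itself achieves this with a learned representation and rate-distortion machinery; this sketch only conveys the single-shot interface.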

Cite

Text

Gitiaux and Rangwala. "SoFaiR: Single Shot Fair Representation Learning." International Joint Conference on Artificial Intelligence, 2022. doi:10.24963/IJCAI.2022/97

Markdown

[Gitiaux and Rangwala. "SoFaiR: Single Shot Fair Representation Learning." International Joint Conference on Artificial Intelligence, 2022.](https://mlanthology.org/ijcai/2022/gitiaux2022ijcai-sofair/) doi:10.24963/IJCAI.2022/97

BibTeX

@inproceedings{gitiaux2022ijcai-sofair,
  title     = {{SoFaiR: Single Shot Fair Representation Learning}},
  author    = {Gitiaux, Xavier and Rangwala, Huzefa},
  booktitle = {International Joint Conference on Artificial Intelligence},
  year      = {2022},
  pages     = {687--695},
  doi       = {10.24963/IJCAI.2022/97},
  url       = {https://mlanthology.org/ijcai/2022/gitiaux2022ijcai-sofair/}
}