A Survey on Fairness Without Demographics

Abstract

Bias in Machine Learning (ML) models is a significant challenge for the ML community. Real-world biases can be embedded in the data used to train models, and prior studies have shown that ML models can learn and even amplify these biases. This can result in unfair treatment of individuals based on their inherent characteristics or sensitive attributes such as gender, race, or age. With the increasing use of ML models in high-stakes scenarios, ensuring fairness has become crucial and has gained significant attention from researchers in recent years. However, the challenge of ensuring fairness becomes much greater when the assumption of full access to sensitive attributes does not hold. Settings where this assumption fails include cases where (1) only limited or noisy demographic information is available or (2) demographic information is entirely unobserved due to privacy restrictions. This survey reviews recent research efforts to enforce fairness when sensitive attributes are missing. We propose a taxonomy of existing works and, more importantly, highlight current challenges and future research directions to stimulate research on ML fairness in the setting of missing sensitive attributes.
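To make the difficulty concrete, the following minimal sketch (illustrative, not taken from the paper; the function name and data are invented) shows why standard group-fairness metrics such as demographic parity presuppose access to the sensitive attribute, and therefore cannot be evaluated or enforced directly when that attribute is missing.

import numpy as np

def demographic_parity_difference(y_pred, sensitive):
    """|P(Yhat=1 | A=0) - P(Yhat=1 | A=1)| for a binary sensitive attribute A."""
    y_pred = np.asarray(y_pred)
    sensitive = np.asarray(sensitive)
    rate_a0 = y_pred[sensitive == 0].mean()  # positive rate in group A=0
    rate_a1 = y_pred[sensitive == 1].mean()  # positive rate in group A=1
    return abs(rate_a0 - rate_a1)

y_pred = np.array([1, 1, 1, 1, 0, 0])  # model predictions

# With full access to the sensitive attribute, the metric is well defined:
sensitive = np.array([0, 0, 0, 1, 1, 1])
print(demographic_parity_difference(y_pred, sensitive))  # ~0.67, a clear disparity

# If the sensitive column is withheld (e.g., for privacy) or only observed
# with noise, there is nothing reliable to condition on. This is exactly the
# setting the survey addresses.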

Cite

Text

Kenfack et al. "A Survey on Fairness Without Demographics." Transactions on Machine Learning Research, 2024.

Markdown

[Kenfack et al. "A Survey on Fairness Without Demographics." Transactions on Machine Learning Research, 2024.](https://mlanthology.org/tmlr/2024/kenfack2024tmlr-survey/)

BibTeX

@article{kenfack2024tmlr-survey,
  title     = {{A Survey on Fairness Without Demographics}},
  author    = {Kenfack, Patrik Joslin and Kahou, Samira Ebrahimi and Aïvodji, Ulrich},
  journal   = {Transactions on Machine Learning Research},
  year      = {2024},
  url       = {https://mlanthology.org/tmlr/2024/kenfack2024tmlr-survey/}
}