Fair Learning with Private Demographic Data
Abstract
Sensitive attributes such as race are rarely available to learners in real-world settings, as their collection is often restricted by laws and regulations. We give a scheme that allows individuals to release their sensitive information privately while still allowing any downstream entity to learn non-discriminatory predictors. We show how to adapt non-discriminatory learners to work with privatized protected attributes, giving theoretical guarantees on performance. Finally, we highlight how the methodology could apply to learning fair predictors in settings where protected attributes are only available for a subset of the data.
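A standard mechanism for privately releasing a categorical attribute of the kind the abstract describes is randomized response under local differential privacy; whether this is the exact mechanism used in the paper is an assumption here, and the function below is only an illustrative sketch. With probability proportional to e^ε the individual reports their true attribute value, and otherwise reports one of the other values uniformly at random, so no reported value reveals the true one with certainty.

```python
import math
import random

def randomized_response(true_value, values, epsilon):
    """Release a categorical attribute under epsilon-local differential privacy.

    Reports the true value with probability e^eps / (e^eps + k - 1),
    where k = len(values); otherwise reports one of the other k - 1
    values chosen uniformly at random.
    """
    k = len(values)
    p_true = math.exp(epsilon) / (math.exp(epsilon) + k - 1)
    if random.random() < p_true:
        return true_value
    # Report a uniformly random value other than the true one.
    others = [v for v in values if v != true_value]
    return random.choice(others)

# Example: an individual privately reports a demographic group label.
groups = ["A", "B", "C"]
reported = randomized_response("A", groups, epsilon=1.0)
```

Because the flipping probabilities are known, a downstream learner can correct statistics computed on the privatized attributes in expectation, which is the kind of adaptation the abstract refers to.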
Cite
Text
Mozannar et al. "Fair Learning with Private Demographic Data." International Conference on Machine Learning, 2020.

Markdown

[Mozannar et al. "Fair Learning with Private Demographic Data." International Conference on Machine Learning, 2020.](https://mlanthology.org/icml/2020/mozannar2020icml-fair/)

BibTeX
@inproceedings{mozannar2020icml-fair,
title = {{Fair Learning with Private Demographic Data}},
author = {Mozannar, Hussein and Ohannessian, Mesrob and Srebro, Nathan},
booktitle = {International Conference on Machine Learning},
year = {2020},
pages = {7066--7075},
volume = {119},
url = {https://mlanthology.org/icml/2020/mozannar2020icml-fair/}
}