Wasserstein Fair Classification
Abstract
We propose an approach to fair classification that enforces independence between the classifier outputs and sensitive information by minimizing Wasserstein-1 distances. The approach has desirable theoretical properties and is robust to specific choices of the threshold used to obtain class predictions from model outputs. We introduce different methods that enable hiding sensitive information at test time or have a simple and fast implementation. We show empirical performance against different fairness baselines on several benchmark fairness datasets.
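The abstract's core quantity is the Wasserstein-1 distance between the distributions of classifier outputs across sensitive groups. As a minimal illustrative sketch (not the paper's method or training procedure), the snippet below approximates the 1-D Wasserstein-1 distance between two score samples via the quantile-coupling formula; the group score distributions and sample sizes are invented for illustration:

```python
import random

def wasserstein_1(a, b, grid=1000):
    """Approximate the 1-D Wasserstein-1 distance between two samples
    by averaging |F_a^{-1}(q) - F_b^{-1}(q)| over a grid of quantiles q."""
    a, b = sorted(a), sorted(b)

    def quantile(xs, q):
        # Empirical quantile: index into the sorted sample.
        idx = min(int(q * len(xs)), len(xs) - 1)
        return xs[idx]

    return sum(
        abs(quantile(a, (i + 0.5) / grid) - quantile(b, (i + 0.5) / grid))
        for i in range(grid)
    ) / grid

# Toy classifier scores for two sensitive groups (hypothetical data);
# a fairness penalty of the kind described would push this gap toward 0.
random.seed(0)
scores_g0 = [random.betavariate(2, 5) for _ in range(500)]
scores_g1 = [random.betavariate(5, 2) for _ in range(500)]
gap = wasserstein_1(scores_g0, scores_g1)
print(round(gap, 3))
```

Because the distance compares whole output distributions rather than a single thresholded statistic, driving it to zero enforces independence at every decision threshold simultaneously, which is the threshold-robustness property the abstract highlights.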
Cite
Text
Jiang et al. "Wasserstein Fair Classification." Uncertainty in Artificial Intelligence, 2019.
Markdown
[Jiang et al. "Wasserstein Fair Classification." Uncertainty in Artificial Intelligence, 2019.](https://mlanthology.org/uai/2019/jiang2019uai-wasserstein/)
BibTeX
@inproceedings{jiang2019uai-wasserstein,
title = {{Wasserstein Fair Classification}},
author = {Jiang, Ray and Pacchiano, Aldo and Stepleton, Tom and Jiang, Heinrich and Chiappa, Silvia},
booktitle = {Uncertainty in Artificial Intelligence},
year = {2019},
pages = {862--872},
volume = {115},
url = {https://mlanthology.org/uai/2019/jiang2019uai-wasserstein/}
}