Null-Sampling for Interpretable and Fair Representations
Abstract
We propose to learn invariant representations, in the data domain, to achieve interpretability in algorithmic fairness. Invariance implies a selectivity for high-level, relevant correlations w.r.t. class label annotations, and a robustness to irrelevant correlations with protected characteristics such as race or gender. We introduce a non-trivial setup in which the training set exhibits a strong bias, such that relevant correlations with the class label annotations cannot be distinguished from spurious correlations with the protected characteristic. To address this problem, we introduce an adversarially trained model with a null-sampling procedure that produces invariant representations in the data domain. To enable disentanglement, we use a partially labelled representative set. By placing the representations in the data domain, the changes made by the model can be examined directly by human auditors. We show the effectiveness of our method on both image and tabular datasets: Coloured MNIST, CelebA, and the Adult dataset.
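The null-sampling idea described above can be illustrated with a small sketch: encode the input into a latent code split into a class-relevant part and a part intended to absorb the protected characteristic, replace the latter with zeros ("null-sample" it), and decode back into the data domain so the result can be inspected visually. The sketch below is only illustrative, assuming a plain PyTorch autoencoder with hypothetical Encoder/Decoder modules and latent sizes; it does not reproduce the adversarial training or the invertible architectures used in the paper.

```python
# Minimal, illustrative sketch of null-sampling (not the authors' implementation).
import torch
import torch.nn as nn


class Encoder(nn.Module):
    """Maps an input x to a latent code split into a class-relevant part z_y
    and a part z_s meant to absorb the protected characteristic."""
    def __init__(self, in_dim: int, zy_dim: int, zs_dim: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(),
                                 nn.Linear(128, zy_dim + zs_dim))
        self.zy_dim = zy_dim

    def forward(self, x: torch.Tensor):
        z = self.net(x)
        return z[:, :self.zy_dim], z[:, self.zy_dim:]  # (z_y, z_s)


class Decoder(nn.Module):
    """Maps the full latent code back into the data domain."""
    def __init__(self, out_dim: int, zy_dim: int, zs_dim: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(zy_dim + zs_dim, 128), nn.ReLU(),
                                 nn.Linear(128, out_dim))

    def forward(self, z_y: torch.Tensor, z_s: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([z_y, z_s], dim=1))


def null_sample(encoder: Encoder, decoder: Decoder, x: torch.Tensor) -> torch.Tensor:
    """Zero out the protected-attribute partition of the latent code and decode,
    yielding a data-domain representation that a human auditor can inspect."""
    z_y, z_s = encoder(x)
    return decoder(z_y, torch.zeros_like(z_s))


# Usage example: a batch of 4 flattened 28x28 inputs (e.g. Coloured MNIST digits).
enc = Encoder(in_dim=784, zy_dim=32, zs_dim=32)
dec = Decoder(out_dim=784, zy_dim=32, zs_dim=32)
x_invariant = null_sample(enc, dec, torch.randn(4, 784))
print(x_invariant.shape)  # torch.Size([4, 784])
```

In the paper, an adversary (together with the partially labelled representative set) is what encourages z_s to capture only the protected characteristic; the sketch omits that training step and only shows how the invariant, data-domain output is formed once the split is learned.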
Cite
Text
Kehrenberg et al. "Null-Sampling for Interpretable and Fair Representations." Proceedings of the European Conference on Computer Vision (ECCV), 2020. doi:10.1007/978-3-030-58574-7_34

Markdown

[Kehrenberg et al. "Null-Sampling for Interpretable and Fair Representations." Proceedings of the European Conference on Computer Vision (ECCV), 2020.](https://mlanthology.org/eccv/2020/kehrenberg2020eccv-nullsampling/) doi:10.1007/978-3-030-58574-7_34

BibTeX
@inproceedings{kehrenberg2020eccv-nullsampling,
title = {{Null-Sampling for Interpretable and Fair Representations}},
author = {Kehrenberg, Thomas and Bartlett, Myles and Thomas, Oliver and Quadrianto, Novi},
booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
year = {2020},
doi = {10.1007/978-3-030-58574-7_34},
url = {https://mlanthology.org/eccv/2020/kehrenberg2020eccv-nullsampling/}
}