Fairwashing Explanations with Off-Manifold Detergent
Abstract
Explanation methods promise to make black-box classifiers more transparent. As a result, it is hoped that they can act as proof of a sensible, fair, and trustworthy decision-making process of the algorithm and thereby increase its acceptance among end users. In this paper, we show both theoretically and experimentally that these hopes are presently unfounded. Specifically, we show that, for any classifier $g$, one can always construct another classifier $\tilde{g}$ which has the same behavior on the data (same train, validation, and test error) but arbitrarily manipulated explanation maps. We derive this statement theoretically using differential geometry and demonstrate it experimentally for various explanation methods, architectures, and datasets. Motivated by our theoretical insights, we then propose a modification of existing explanation methods which makes them significantly more robust.
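To illustrate the claim in the abstract, the following minimal PyTorch sketch shows one way (not the authors' actual construction) that a term which vanishes at a data point but has a large gradient there can be added to a toy classifier: the prediction at the data point is unchanged while the gradient-based saliency explanation shifts in an arbitrary direction. All names (g, g_tilde, h, saliency) and the toy linear model are hypothetical and chosen only for illustration.

```python
import torch

# Toy linear "classifier" on 2-D inputs (hypothetical example, not from the paper).
w = torch.tensor([1.0, -1.0])
x0 = torch.tensor([0.5, 0.5])   # a single "training" point
v = torch.tensor([10.0, 10.0])  # direction the manipulated explanation should point in

def g(x):
    # Original logit.
    return x @ w

def h(x):
    # Vanishes at x0 but has gradient v there, so it only acts away from the data point.
    return ((x - x0) @ v) * torch.exp(-((x - x0) ** 2).sum() / 0.01)

def g_tilde(x):
    # Manipulated classifier: same value as g at x0, different gradient.
    return g(x) + h(x)

def saliency(f, x):
    # Plain-gradient ("saliency map") explanation of scalar output f(x).
    x = x.clone().requires_grad_(True)
    f(x).backward()
    return x.grad

print(g(x0).item(), g_tilde(x0).item())  # identical prediction at the data point
print(saliency(g, x0))                   # tensor([ 1., -1.])
print(saliency(g_tilde, x0))             # tensor([11.,  9.]) -- shifted toward v
```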
Cite
Text
Anders et al. "Fairwashing Explanations with Off-Manifold Detergent." International Conference on Machine Learning, 2020.
Markdown
[Anders et al. "Fairwashing Explanations with Off-Manifold Detergent." International Conference on Machine Learning, 2020.](https://mlanthology.org/icml/2020/anders2020icml-fairwashing/)
BibTeX
@inproceedings{anders2020icml-fairwashing,
  title = {{Fairwashing Explanations with Off-Manifold Detergent}},
  author = {Anders, Christopher and Pasliev, Plamen and Dombrowski, Ann-Kathrin and Müller, Klaus-Robert and Kessel, Pan},
  booktitle = {International Conference on Machine Learning},
  year = {2020},
  pages = {314-323},
  volume = {119},
  url = {https://mlanthology.org/icml/2020/anders2020icml-fairwashing/}
}