Causally Motivated Shortcut Removal Using Auxiliary Labels
Abstract
Shortcut learning, in which models make use of easy-to-represent but unstable associations, is a major failure mode for robust machine learning. We study a flexible, causally-motivated approach to training robust predictors by discouraging the use of specific shortcuts, focusing on a common setting where a robust predictor could achieve optimal i.i.d. generalization in principle, but is overshadowed by a shortcut predictor in practice. Our approach uses auxiliary labels, typically available at training time, to enforce conditional independences implied by the causal graph. We show both theoretically and empirically that causally-motivated regularization schemes (a) lead to more robust estimators that generalize well under distribution shift, and (b) have better finite-sample efficiency compared to standard regularization schemes, even when no shortcut is present. Our analysis highlights important theoretical properties of training techniques commonly used in the causal inference, fairness, and disentanglement literatures. Our code is available at github.com/mymakar/causally_motivated_shortcut_removal
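As a rough illustration of the idea sketched in the abstract, one way to penalize a conditional dependence between a learned representation and an auxiliary label is to add a maximum mean discrepancy (MMD) term that pushes the representation's distribution to match across auxiliary-label groups. The snippet below is a minimal sketch under that assumption (RBF-kernel MMD, binary auxiliary label), not the paper's exact implementation; all function names are illustrative.

```python
import numpy as np

def rbf_kernel(x, y, sigma=1.0):
    # Pairwise RBF kernel matrix between rows of x and rows of y.
    d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def mmd2(x, y, sigma=1.0):
    # Biased (V-statistic) estimate of the squared MMD between
    # the samples x and y; zero iff the empirical kernel mean
    # embeddings coincide.
    return (rbf_kernel(x, x, sigma).mean()
            - 2 * rbf_kernel(x, y, sigma).mean()
            + rbf_kernel(y, y, sigma).mean())

def penalized_loss(base_loss, reps, aux, alpha=1.0):
    # Augment a base training loss with an MMD penalty that
    # discourages the representation `reps` from differing in
    # distribution across the two auxiliary-label groups
    # (aux in {0, 1}); alpha controls the penalty strength.
    g0, g1 = reps[aux == 0], reps[aux == 1]
    return base_loss + alpha * mmd2(g0, g1)
```

In a training loop, `reps` would be the output of the model's representation layer on a minibatch and `aux` the corresponding auxiliary labels; larger `alpha` trades i.i.d. fit for invariance to the shortcut feature.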
Cite
Text
Makar et al. "Causally Motivated Shortcut Removal Using Auxiliary Labels." Artificial Intelligence and Statistics, 2022.
Markdown
[Makar et al. "Causally Motivated Shortcut Removal Using Auxiliary Labels." Artificial Intelligence and Statistics, 2022.](https://mlanthology.org/aistats/2022/makar2022aistats-causally/)
BibTeX
@inproceedings{makar2022aistats-causally,
title = {{Causally Motivated Shortcut Removal Using Auxiliary Labels}},
author = {Makar, Maggie and Packer, Ben and Moldovan, Dan and Blalock, Davis and Halpern, Yoni and D'Amour, Alexander},
booktitle = {Artificial Intelligence and Statistics},
year = {2022},
pages = {739--766},
volume = {151},
url = {https://mlanthology.org/aistats/2022/makar2022aistats-causally/}
}