Generative Interventions for Causal Learning

Abstract

We introduce a framework for learning robust visual representations that generalize to new viewpoints, backgrounds, and scene contexts. Discriminative models often learn naturally occurring spurious correlations, which cause them to fail on images outside the training distribution. In this paper, we show that we can steer generative models to manufacture interventions on features caused by confounding factors. Experiments, visualizations, and theoretical results show that this method learns robust representations more consistent with the underlying causal relationships. Our approach improves performance on multiple datasets demanding out-of-distribution generalization, and we demonstrate state-of-the-art performance generalizing from ImageNet to the ObjectNet dataset.

Cite

Text

Mao et al. "Generative Interventions for Causal Learning." Conference on Computer Vision and Pattern Recognition, 2021. doi:10.1109/CVPR46437.2021.00394

Markdown

[Mao et al. "Generative Interventions for Causal Learning." Conference on Computer Vision and Pattern Recognition, 2021.](https://mlanthology.org/cvpr/2021/mao2021cvpr-generative/) doi:10.1109/CVPR46437.2021.00394

BibTeX

@inproceedings{mao2021cvpr-generative,
  title     = {{Generative Interventions for Causal Learning}},
  author    = {Mao, Chengzhi and Cha, Augustine and Gupta, Amogh and Wang, Hao and Yang, Junfeng and Vondrick, Carl},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2021},
  pages     = {3947--3956},
  doi       = {10.1109/CVPR46437.2021.00394},
  url       = {https://mlanthology.org/cvpr/2021/mao2021cvpr-generative/}
}