Learning Explanations That Are Hard to Vary
Abstract
In this paper, we investigate the principle that good explanations are hard to vary in the context of deep learning. We show that averaging gradients across examples -- akin to a logical OR of patterns -- can favor memorization and 'patchwork' solutions that sew together different strategies, instead of identifying invariances. To inspect this, we first formalize a notion of consistency for minima of the loss surface, which measures to what extent a minimum appears only when examples are pooled. We then propose and experimentally validate a simple alternative algorithm based on a logical AND that focuses on invariances and prevents memorization in a set of real-world tasks. Finally, using a synthetic dataset with a clear distinction between invariant and spurious mechanisms, we dissect learning signals and compare this approach to well-established regularizers.
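Concretely, the "logical AND" in the abstract replaces the arithmetic mean of per-environment (or per-example) gradients with a masked mean that keeps only the components whose signs agree everywhere, which the full paper develops as the AND-mask. Below is a minimal NumPy sketch of that idea; the function name `and_mask`, the agreement threshold `tau`, and the toy gradients are illustrative assumptions, not the authors' code.

```python
import numpy as np

def and_mask(env_grads, tau=1.0):
    """Average gradients across environments, zeroing components whose
    signs disagree -- a logical AND instead of a logical OR.

    env_grads: array of shape (n_envs, n_params), one gradient per
        environment (or per example).
    tau: fraction of sign agreement required to keep a component;
        tau=1.0 is the strict AND. (Hypothetical parameterization.)
    """
    signs = np.sign(env_grads)
    # |mean of signs| equals 1.0 iff every environment agrees on the sign.
    agreement = np.abs(signs.mean(axis=0))
    mask = (agreement >= tau).astype(env_grads.dtype)
    return mask * env_grads.mean(axis=0)

# Two environments: component 0 has a consistent sign, component 1 does not.
g = np.array([[0.5,  0.3],
              [0.4, -0.2]])
print(and_mask(g))  # [0.45 0.  ] -- the inconsistent direction is dropped
```

In this sketch the masked update only follows directions that reduce the loss in every environment, which is one way to read "explanations that are hard to vary": a pattern that survives the AND must hold across all pooled examples, not just on average.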
Cite
Text
Parascandolo et al. "Learning Explanations That Are Hard to Vary." International Conference on Learning Representations, 2021.
Markdown
[Parascandolo et al. "Learning Explanations That Are Hard to Vary." International Conference on Learning Representations, 2021.](https://mlanthology.org/iclr/2021/parascandolo2021iclr-learning/)
BibTeX
@inproceedings{parascandolo2021iclr-learning,
  title = {{Learning Explanations That Are Hard to Vary}},
  author = {Parascandolo, Giambattista and Neitz, Alexander and Orvieto, Antonio and Gresele, Luigi and Schölkopf, Bernhard},
  booktitle = {International Conference on Learning Representations},
  year = {2021},
  url = {https://mlanthology.org/iclr/2021/parascandolo2021iclr-learning/}
}