Fishr: Invariant Gradient Variances for Out-of-Distribution Generalization
Abstract
Learning robust models that generalize well under changes in the data distribution is critical for real-world applications. To this end, there has been a growing surge of interest in learning simultaneously from multiple training domains - while enforcing different types of invariance across those domains. Yet, all existing approaches fail to show systematic benefits under controlled evaluation protocols. In this paper, we introduce a new regularization - named Fishr - that enforces domain invariance in the space of the gradients of the loss: specifically, the domain-level variances of gradients are matched across training domains. Our approach is based on the close relations between the gradient covariance, the Fisher Information and the Hessian of the loss: in particular, we show that Fishr eventually aligns the domain-level loss landscapes locally around the final weights. Extensive experiments demonstrate the effectiveness of Fishr for out-of-distribution generalization. Notably, Fishr improves the state of the art on the DomainBed benchmark and performs consistently better than Empirical Risk Minimization. Our code is available at https://github.com/alexrame/fishr.
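For illustration, here is a minimal sketch of a Fishr-style penalty in PyTorch. It is not the authors' implementation (the official repository linked above uses BackPACK to extract per-sample gradients of the classifier head efficiently): per-sample gradients are computed with a plain Python loop, and the names `gradient_variance`, `fishr_penalty`, and `domain_batches` are hypothetical.

```python
import torch
import torch.nn.functional as F

def gradient_variance(model, inputs, targets):
    """Element-wise variance of per-sample gradients on one domain's batch.

    Sketch only: a Python loop over samples is slow; efficient per-sample
    gradients require a dedicated tool (e.g. BackPACK, as in the paper).
    """
    per_sample = []
    for x, y in zip(inputs, targets):
        loss = F.cross_entropy(model(x.unsqueeze(0)), y.unsqueeze(0))
        # create_graph=True so the penalty itself can be backpropagated
        grads = torch.autograd.grad(loss, model.parameters(), create_graph=True)
        per_sample.append(torch.cat([g.flatten() for g in grads]))
    per_sample = torch.stack(per_sample)          # (batch_size, n_params)
    # Diagonal of the gradient covariance, i.e. the per-parameter variance
    return per_sample.var(dim=0, unbiased=False)

def fishr_penalty(model, domain_batches):
    """Quadratic distance between each domain's gradient variance and their mean.

    `domain_batches` is assumed to be a list of (inputs, targets) pairs,
    one pair per training domain.
    """
    variances = [gradient_variance(model, x, y) for x, y in domain_batches]
    mean_variance = torch.stack(variances).mean(dim=0)
    return sum((v - mean_variance).pow(2).sum() for v in variances) / len(variances)
```

Under these assumptions, the training objective would be the average per-domain ERM loss plus λ times this penalty, with the regularization strength λ (and, in the paper, an exponential moving average of the variances and a warmup period) treated as hyperparameters.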
Cite
Text
Rame et al. "Fishr: Invariant Gradient Variances for Out-of-Distribution Generalization." International Conference on Machine Learning, 2022.
Markdown
[Rame et al. "Fishr: Invariant Gradient Variances for Out-of-Distribution Generalization." International Conference on Machine Learning, 2022.](https://mlanthology.org/icml/2022/rame2022icml-fishr/)
BibTeX
@inproceedings{rame2022icml-fishr,
title = {{Fishr: Invariant Gradient Variances for Out-of-Distribution Generalization}},
author = {Rame, Alexandre and Dancette, Corentin and Cord, Matthieu},
booktitle = {International Conference on Machine Learning},
year = {2022},
pages = {18347--18377},
volume = {162},
url = {https://mlanthology.org/icml/2022/rame2022icml-fishr/}
}