Gradient Matching for Domain Generalization

Abstract

Machine learning systems typically assume that the distributions of training and test sets match closely. However, a critical requirement of such systems in the real world is their ability to generalize to unseen domains. Here, we propose an _inter-domain gradient matching_ objective that targets domain generalization by maximizing the inner product between gradients from different domains. Since direct optimization of the gradient inner product can be computationally prohibitive (it requires computation of second-order derivatives), we derive a simpler first-order algorithm named Fish that approximates its optimization. We perform experiments on the Wilds benchmark, which captures distribution shift in the real world, as well as the DomainBed benchmark, which focuses more on synthetic-to-real transfer. Our method produces competitive results on both benchmarks, demonstrating its effectiveness across a wide range of domain generalization tasks.
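The first-order update described in the abstract can be sketched as follows: clone the weights, take one gradient step per domain in sequence, then move the original weights a fraction of the way toward the inner-loop endpoint. This is a minimal NumPy illustration on toy linear-regression "domains"; the data, step sizes, and function names are illustrative assumptions, not the paper's actual setup or hyperparameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: two synthetic "domains" sharing the same true weights.
# (Illustrative only -- the paper uses neural networks on Wilds/DomainBed.)
w_true = np.array([2.0, -1.0])
domains = []
for _ in range(2):
    X = rng.normal(size=(64, 2))
    y = X @ w_true + 0.01 * rng.normal(size=64)
    domains.append((X, y))

def grad(w, X, y):
    """Gradient of mean squared error for linear regression."""
    return 2.0 * X.T @ (X @ w - y) / len(y)

def fish_step(w, domains, alpha=0.05, eps=0.5):
    """One meta-step of the Fish-style update (assumed sketch):
    run inner SGD across domains in random order, then move the
    weights toward the inner-loop endpoint. The difference between
    endpoint and start implicitly contains the gradient inner-product
    terms the objective seeks to maximize."""
    w_tilde = w.copy()
    for i in rng.permutation(len(domains)):
        X, y = domains[i]
        w_tilde = w_tilde - alpha * grad(w_tilde, X, y)
    return w + eps * (w_tilde - w)

w = np.zeros(2)
for _ in range(200):
    w = fish_step(w, domains)
print(np.round(w, 2))
```

On this toy problem the meta-updates recover weights close to `w_true`; the point of the sketch is only the update structure (inner loop over domains, then an interpolation step), which avoids the second-order derivatives that direct optimization of the gradient inner product would require.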

Cite

Text

Shi et al. "Gradient Matching for Domain Generalization." International Conference on Learning Representations, 2022.

Markdown

[Shi et al. "Gradient Matching for Domain Generalization." International Conference on Learning Representations, 2022.](https://mlanthology.org/iclr/2022/shi2022iclr-gradient/)

BibTeX

@inproceedings{shi2022iclr-gradient,
  title     = {{Gradient Matching for Domain Generalization}},
  author    = {Shi, Yuge and Seely, Jeffrey and Torr, Philip and N, Siddharth and Hannun, Awni and Usunier, Nicolas and Synnaeve, Gabriel},
  booktitle = {International Conference on Learning Representations},
  year      = {2022},
  url       = {https://mlanthology.org/iclr/2022/shi2022iclr-gradient/}
}