Learning to Generalize: Meta-Learning for Domain Generalization

Abstract

Domain shift refers to the well-known problem that a model trained in one source domain performs poorly when applied to a target domain with different statistics. Domain Generalization (DG) techniques attempt to alleviate this issue by producing models which by design generalize well to novel testing domains. We propose a novel meta-learning method for domain generalization. Rather than designing a specific model that is robust to domain shift as in most previous DG work, we propose a model-agnostic training procedure for DG. Our algorithm simulates train/test domain shift during training by synthesizing virtual testing domains within each mini-batch. The meta-optimization objective requires that steps to improve training domain performance should also improve testing domain performance. This meta-learning procedure trains models with good generalization ability to novel domains. We evaluate our method and achieve state-of-the-art results on a recent cross-domain image classification benchmark, as well as demonstrating its potential on two classic reinforcement learning tasks.
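The meta-optimization described in the abstract can be illustrated with a minimal first-order sketch: source domains are split each iteration into a meta-train domain and a virtual meta-test domain, an inner gradient step is simulated on meta-train, and the final update combines the meta-train gradient with the meta-test gradient evaluated after that step. The toy linear-regression "domains", the squared loss, all hyperparameter values, and the first-order approximation are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Three synthetic source domains sharing one underlying linear model,
# each with its own sampled inputs and observation noise.
true_w = np.array([1.0, -2.0])
domains = []
for _ in range(3):
    X = rng.normal(size=(32, 2))
    y = X @ true_w + 0.1 * rng.normal(size=32)
    domains.append((X, y))

def loss_and_grad(w, batch):
    """Mean squared error and its gradient for a linear model."""
    X, y = batch
    err = X @ w - y
    return (err ** 2).mean(), 2.0 * X.T @ err / len(y)

def mldg_step(w, meta_train, meta_test, alpha=0.05, beta=1.0, lr=0.05):
    """One meta-update: a step that improves meta-train performance
    should also improve performance on the held-out virtual test domain.
    Uses a first-order approximation (second-order terms dropped)."""
    _, g_train = loss_and_grad(w, meta_train)
    w_inner = w - alpha * g_train            # simulated update on meta-train
    _, g_test = loss_and_grad(w_inner, meta_test)
    return w - lr * (g_train + beta * g_test)

w = np.zeros(2)
for step in range(200):
    # Each iteration, hold one source domain out as the virtual test domain.
    test_idx = step % 3
    meta_test = domains[test_idx]
    meta_train = domains[(test_idx + 1) % 3]
    w = mldg_step(w, meta_train, meta_test)
```

After training, `w` should land close to `true_w`, since every domain shares the same underlying weights; the point of the exercise is that each update is penalized unless it also helps the held-out domain.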

Cite

Text

Li et al. "Learning to Generalize: Meta-Learning for Domain Generalization." AAAI Conference on Artificial Intelligence, 2018. doi:10.1609/AAAI.V32I1.11596

Markdown

[Li et al. "Learning to Generalize: Meta-Learning for Domain Generalization." AAAI Conference on Artificial Intelligence, 2018.](https://mlanthology.org/aaai/2018/li2018aaai-learning-a/) doi:10.1609/AAAI.V32I1.11596

BibTeX

@inproceedings{li2018aaai-learning-a,
  title     = {{Learning to Generalize: Meta-Learning for Domain Generalization}},
  author    = {Li, Da and Yang, Yongxin and Song, Yi-Zhe and Hospedales, Timothy M.},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2018},
  pages     = {3490-3497},
  doi       = {10.1609/AAAI.V32I1.11596},
  url       = {https://mlanthology.org/aaai/2018/li2018aaai-learning-a/}
}