MetaReg: Towards Domain Generalization Using Meta-Regularization

Abstract

Training models that generalize to new domains at test time is a problem of fundamental importance in machine learning. In this work, we encode this notion of domain generalization using a novel regularization function. We pose the problem of finding such a regularization function in a learning-to-learn (meta-learning) framework. The objective of domain generalization is explicitly modeled by learning a regularizer that encourages a model trained on one domain to perform well on another domain. Experimental validations on computer vision and natural language datasets indicate that our method can learn regularizers that achieve good cross-domain generalization.
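The bilevel idea in the abstract — take an inner training step on one domain with a parameterized regularizer, then update the regularizer so the adapted model does well on a *different* domain — can be illustrated on a toy problem. The sketch below is not the paper's implementation; it assumes a linear model, a weighted-L1 regularizer `R_phi(theta) = sum_i phi_i * |theta_i|`, a single inner gradient step, and a synthetic setup (all hypothetical) where two features are spuriously predictive in the training domain only:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5

# Hypothetical toy setup: features 0-2 predict the target in both domains,
# features 3-4 are spuriously predictive only in domain a.
w_shared = np.array([1.0, -2.0, 0.5, 0.0, 0.0])
w_spurious = np.array([0.0, 0.0, 0.0, 3.0, -3.0])

def make_domain(n, spurious):
    X = rng.normal(size=(n, d))
    w = w_shared + (w_spurious if spurious else 0.0)
    return X, X @ w + 0.1 * rng.normal(size=n)

Xa, ya = make_domain(200, spurious=True)   # training domain a
Xb, yb = make_domain(200, spurious=False)  # held-out domain b driving the meta-loss

def loss_and_grad(theta, X, y):
    r = X @ theta - y
    return r @ r / len(y), 2 * X.T @ r / len(y)

alpha, eta = 0.05, 0.5  # inner-loop / meta step sizes (illustrative values)
phi = np.zeros(d)       # learnable regularizer weights: R_phi(theta) = sum_i phi_i * |theta_i|
theta = 0.1 * rng.normal(size=d)

meta_losses = []
for _ in range(300):
    # Inner step: one gradient step on domain a with the current regularizer.
    _, ga = loss_and_grad(theta, Xa, ya)
    theta_new = theta - alpha * (ga + phi * np.sign(theta))

    # Meta step: the adapted parameters should do well on the *other* domain.
    lb, gb = loss_and_grad(theta_new, Xb, yb)
    meta_losses.append(lb)
    # Backprop through the inner step:
    # d lb / d phi_i = gb_i * d theta_new_i / d phi_i = -alpha * gb_i * sign(theta_i)
    phi = np.maximum(phi - eta * (-alpha * gb * np.sign(theta)), 0.0)
    theta = theta_new

print("meta-loss:", meta_losses[0], "->", meta_losses[-1])
print("learned phi:", np.round(phi, 2))
```

In this toy run the meta-gradient drives the penalty weights on the spurious features up, so the model trained on domain a keeps those weights near zero and the loss on domain b drops, mirroring the cross-domain objective described above.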

Cite

Text

Balaji et al. "MetaReg: Towards Domain Generalization Using Meta-Regularization." Neural Information Processing Systems, 2018.

Markdown

[Balaji et al. "MetaReg: Towards Domain Generalization Using Meta-Regularization." Neural Information Processing Systems, 2018.](https://mlanthology.org/neurips/2018/balaji2018neurips-metareg/)

BibTeX

@inproceedings{balaji2018neurips-metareg,
  title     = {{MetaReg: Towards Domain Generalization Using Meta-Regularization}},
  author    = {Balaji, Yogesh and Sankaranarayanan, Swami and Chellappa, Rama},
  booktitle = {Neural Information Processing Systems},
  year      = {2018},
  pages     = {998--1008},
  url       = {https://mlanthology.org/neurips/2018/balaji2018neurips-metareg/}
}