Meta-Learning the Invariant Representation for Domain Generalization
Abstract
Domain generalization studies how to generalize a machine learning model to unseen distributions. Learning representations that are invariant across different source distributions has been shown to be highly effective for domain generalization. However, the intrinsic risk of overfitting to the source domains can limit how well the learned invariance generalizes to a target domain with a large discrepancy from the source domains. To address this problem, we propose a meta-learning algorithm based on bilevel optimization for domain generalization, in which the inner-loop objective minimizes the discrepancy across the source domains while the outer-loop objective minimizes the discrepancy between the source domains and a potential target domain. From a geometric perspective, we show that the proposed algorithm improves out-of-domain robustness for invariance learning. Empirically, we evaluate our method on five datasets and achieve the best results among a range of strong domain generalization baselines.
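To make the bilevel structure concrete, below is a minimal first-order PyTorch-style sketch of one meta-iteration: the inner loop aligns the source domains on a fast copy of the network, and the outer loop aligns the adapted network with a held-out (pseudo-target) domain. All names here (`meta_step`, `domain_discrepancy`, the learning rates, and the mean-embedding distance used as the discrepancy measure) are hypothetical illustrations of the idea described in the abstract, not the authors' actual implementation.

```python
import copy
import itertools
import torch

def domain_discrepancy(f_a, f_b):
    # Hypothetical stand-in for a distribution-discrepancy measure:
    # squared distance between the mean feature embeddings of two batches.
    return (f_a.mean(dim=0) - f_b.mean(dim=0)).pow(2).sum()

def meta_step(net, source_batches, meta_target_batch,
              inner_lr=1e-2, outer_lr=1e-2):
    """One meta-iteration of the bilevel scheme (first-order approximation)."""
    # Inner loop: minimize pairwise discrepancy across the source domains
    # on a fast copy of the network.
    fast = copy.deepcopy(net)
    feats = [fast(x) for x in source_batches]
    inner_loss = sum(domain_discrepancy(a, b)
                     for a, b in itertools.combinations(feats, 2))
    grads = torch.autograd.grad(inner_loss, fast.parameters())
    with torch.no_grad():
        for p, g in zip(fast.parameters(), grads):
            p -= inner_lr * g

    # Outer loop: minimize discrepancy between the adapted source features
    # and the held-out domain; apply a first-order update to the original
    # parameters instead of differentiating through the inner update.
    outer_loss = sum(domain_discrepancy(fast(x), fast(meta_target_batch))
                     for x in source_batches)
    outer_grads = torch.autograd.grad(outer_loss, fast.parameters())
    with torch.no_grad():
        for p, g in zip(net.parameters(), outer_grads):
            p -= outer_lr * g
    return inner_loss.item(), outer_loss.item()
```

In practice one would cycle the role of the held-out domain over the available source domains (leave-one-domain-out), so that the outer objective simulates a shift to an unseen target distribution at every meta-iteration.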
Cite

Text
Jia and Zhang. "Meta-Learning the Invariant Representation for Domain Generalization." Machine Learning, 2024. doi:10.1007/s10994-022-06256-y

Markdown
[Jia and Zhang. "Meta-Learning the Invariant Representation for Domain Generalization." Machine Learning, 2024.](https://mlanthology.org/mlj/2024/jia2024mlj-metalearning/) doi:10.1007/s10994-022-06256-y

BibTeX
@article{jia2024mlj-metalearning,
title = {{Meta-Learning the Invariant Representation for Domain Generalization}},
author = {Jia, Chen and Zhang, Yue},
journal = {Machine Learning},
year = {2024},
pages = {1661--1681},
doi = {10.1007/s10994-022-06256-y},
volume = {113},
url = {https://mlanthology.org/mlj/2024/jia2024mlj-metalearning/}
}