Generalizing to Unseen Domains: A Survey on Domain Generalization
Abstract
Domain generalization (DG), i.e., out-of-distribution generalization, has attracted increasing interest in recent years. Domain generalization deals with a challenging setting where one or several different but related domains are given, and the goal is to learn a model that can generalize to an unseen test domain. Great progress has been achieved over the years. This paper presents the first review of recent advances in domain generalization. First, we provide a formal definition of domain generalization and discuss several related fields. Second, we categorize recent algorithms into three classes and present them in detail: data manipulation, representation learning, and learning strategy, each of which contains several popular algorithms. Third, we introduce the commonly used datasets and applications. Finally, we summarize the existing literature and present some potential research topics for the future.
Cite
Text
Wang et al. "Generalizing to Unseen Domains: A Survey on Domain Generalization." International Joint Conference on Artificial Intelligence, 2021. doi:10.24963/IJCAI.2021/628
Markdown
[Wang et al. "Generalizing to Unseen Domains: A Survey on Domain Generalization." International Joint Conference on Artificial Intelligence, 2021.](https://mlanthology.org/ijcai/2021/wang2021ijcai-generalizing/) doi:10.24963/IJCAI.2021/628
BibTeX
@inproceedings{wang2021ijcai-generalizing,
title = {{Generalizing to Unseen Domains: A Survey on Domain Generalization}},
author = {Wang, Jindong and Lan, Cuiling and Liu, Chang and Ouyang, Yidong and Qin, Tao},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {2021},
  pages = {4627--4635},
doi = {10.24963/IJCAI.2021/628},
url = {https://mlanthology.org/ijcai/2021/wang2021ijcai-generalizing/}
}