Are All Classes Created Equal? Domain Generalization for Domain-Linked Classes
Abstract
Domain generalization (DG) focuses on transferring domain-invariant knowledge from multiple source domains (available at train time) to $\textit{a priori}$ unseen target domains. This task implicitly assumes that each class of interest is expressed in multiple source domains ($\textit{domain-shared}$), which helps break spurious correlations between domain and class and enables domain-invariant learning. However, we observe that this assumption results in extremely poor generalization performance for classes expressed only in a single domain ($\textit{domain-linked}$). To this end, we develop a contrastive and fairness-based algorithm -- $\texttt{FOND}$ -- that learns generalizable representations for these domain-linked classes by transferring useful representations from domain-shared classes. We perform rigorous experiments against popular baselines across benchmark datasets to demonstrate that, given a sufficient number of domain-shared classes, $\texttt{FOND}$ achieves state-of-the-art (SOTA) results for domain-linked DG.
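The abstract names two ingredients -- contrastive representation learning and a fairness-style emphasis on domain-linked classes -- without giving the loss itself. Below is a minimal PyTorch sketch of one plausible combination: a supervised contrastive loss in which anchors from domain-linked classes are upweighted. The function name, the weighting scheme, and the hyperparameters (`temperature`, `linked_weight`) are illustrative assumptions, not the paper's actual $\texttt{FOND}$ objective.

```python
import torch
import torch.nn.functional as F

def weighted_supcon_loss(features, labels, linked_mask,
                         temperature=0.1, linked_weight=2.0):
    """Supervised contrastive loss (Khosla et al., 2020) over L2-normalized
    embeddings, with an assumed fairness-style upweighting of anchors whose
    class is domain-linked. A sketch, not the paper's FOND loss.

    features:    (B, D) embedding batch
    labels:      (B,)   integer class labels
    linked_mask: (B,)   bool, True where the sample's class is domain-linked
    """
    features = F.normalize(features, dim=1)
    sim = features @ features.T / temperature                   # (B, B) cosine sims
    eye = torch.eye(len(labels), dtype=torch.bool, device=features.device)
    sim = sim.masked_fill(eye, float("-inf"))                   # exclude self-pairs
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)  # row-wise log-softmax
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~eye
    pos_counts = pos_mask.sum(dim=1).clamp(min=1)
    # Mean log-probability of each anchor's positives (same-class pairs).
    mean_log_prob_pos = torch.where(
        pos_mask, log_prob, torch.zeros_like(log_prob)
    ).sum(dim=1) / pos_counts
    # Assumed reweighting: emphasize anchors from domain-linked classes.
    weights = 1.0 + (linked_weight - 1.0) * linked_mask.float()
    valid = pos_mask.any(dim=1)                                 # anchors with >= 1 positive
    return -(weights[valid] * mean_log_prob_pos[valid]).mean()
```

A usage example with a toy batch, pretending classes 2 and 3 are domain-linked:

```python
feats = torch.randn(8, 128, requires_grad=True)
labels = torch.randint(0, 4, (8,))
linked = labels >= 2
loss = weighted_supcon_loss(feats, labels, linked)
loss.backward()
```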
Cite

Text
Kaai et al. "Are All Classes Created Equal? Domain Generalization for Domain-Linked Classes." NeurIPS 2023 Workshops: DistShift, 2023.

Markdown
[Kaai et al. "Are All Classes Created Equal? Domain Generalization for Domain-Linked Classes." NeurIPS 2023 Workshops: DistShift, 2023.](https://mlanthology.org/neuripsw/2023/kaai2023neuripsw-all/)

BibTeX
@inproceedings{kaai2023neuripsw-all,
  title = {{Are All Classes Created Equal? Domain Generalization for Domain-Linked Classes}},
  author = {Kaai, Kimathi and Hossain, Saad and Rambhatla, Sirisha},
  booktitle = {NeurIPS 2023 Workshops: DistShift},
  year = {2023},
  url = {https://mlanthology.org/neuripsw/2023/kaai2023neuripsw-all/}
}