Understanding and Robustifying Sub-Domain Alignment for Domain Adaptation
Abstract
In unsupervised domain adaptation (UDA), aligning source and target domains improves the predictive performance of learned models on the target domain. A common methodological improvement in alignment methods is to divide the domains and align sub-domains instead. These sub-domain-based algorithms have demonstrated great empirical success but lack theoretical support. In this work, we establish a rigorous theoretical understanding of the advantages of these methods, which has the potential to enhance their overall impact on the field. Our theory uncovers that sub-domain-based methods optimize an error bound that is at least as strong as non-sub-domain-based error bounds and is empirically verified to be much stronger. Furthermore, our analysis indicates that when the marginal weights of sub-domains shift between source and target tasks, the performance of these methods may be compromised. We therefore implement an algorithm to robustify sub-domain alignment for domain adaptation under sub-domain shift, offering a valuable adaptation strategy for future sub-domain-based methods. Empirical experiments across various benchmarks validate our theoretical insights, demonstrate the necessity of the proposed adaptation strategy, and show the algorithm's competitiveness in handling label shift.
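The core idea in the abstract can be illustrated with a minimal, hypothetical sketch (not the authors' algorithm): each sub-domain is aligned individually, and each sub-domain's alignment term is reweighted by its estimated target proportion to guard against the marginal-weight shift described above. The function names, the RBF-MMD discrepancy, and the pseudo-label-based weight estimate below are all assumptions made for illustration.

```python
# Illustrative sketch only (hypothetical names; not the paper's method):
# class-wise ("sub-domain") feature alignment with importance weights that
# correct for shifted sub-domain proportions between source and target.
import numpy as np

def rbf_mmd2(x, y, sigma=1.0):
    """Squared MMD between two feature matrices under an RBF kernel."""
    def k(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2.0 * k(x, y).mean()

def weighted_subdomain_mmd(src_feats, src_labels, tgt_feats, tgt_pseudo, n_classes):
    """Average per-sub-domain MMD, reweighted by estimated target sub-domain mass."""
    # Assumption: target pseudo-labels are reliable enough to estimate the
    # (possibly shifted) target sub-domain proportions.
    tgt_props = np.bincount(tgt_pseudo, minlength=n_classes) / len(tgt_pseudo)
    total, weight_sum = 0.0, 0.0
    for c in range(n_classes):
        xs, xt = src_feats[src_labels == c], tgt_feats[tgt_pseudo == c]
        if len(xs) == 0 or len(xt) == 0:
            continue  # skip sub-domains with no samples on either side
        total += tgt_props[c] * rbf_mmd2(xs, xt)
        weight_sum += tgt_props[c]
    return total / max(weight_sum, 1e-8)

# Toy usage with random features and random (pseudo-)labels.
rng = np.random.default_rng(0)
src, src_y = rng.normal(size=(200, 16)), rng.integers(0, 4, 200)
tgt, tgt_y = rng.normal(size=(150, 16)), rng.integers(0, 4, 150)
print(weighted_subdomain_mmd(src, src_y, tgt, tgt_y, n_classes=4))
```

In an actual sub-domain-based method, the per-sub-domain discrepancy and the weight estimator would be replaced by whichever alignment loss and shift-correction scheme that method adopts.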
Cite
Text
Liu et al. "Understanding and Robustifying Sub-Domain Alignment for Domain Adaptation." Transactions on Machine Learning Research, 2025.
Markdown
[Liu et al. "Understanding and Robustifying Sub-Domain Alignment for Domain Adaptation." Transactions on Machine Learning Research, 2025.](https://mlanthology.org/tmlr/2025/liu2025tmlr-understanding/)
BibTeX
@article{liu2025tmlr-understanding,
title = {{Understanding and Robustifying Sub-Domain Alignment for Domain Adaptation}},
author = {Liu, Yiling and Dong, Juncheng and Jiang, Ziyang and Aloui, Ahmed and Li, Keyu and Klein, Michael Hunter and Tarokh, Vahid and Carlson, David},
journal = {Transactions on Machine Learning Research},
year = {2025},
url = {https://mlanthology.org/tmlr/2025/liu2025tmlr-understanding/}
}