A Unified Domain Adaptation Framework with Distinctive Divergence Analysis
Abstract
Unsupervised domain adaptation enables knowledge transfer from a labeled source domain to an unlabeled target domain by aligning the learned features of both domains. The idea is theoretically supported by the generalization bound analysis in Ben-David et al. (2007), which specifies the applicable task (binary classification) and designates a specific distribution divergence measure. Although most distribution-aligning domain adaptation models seek theoretical grounds from this particular bound analysis, they do not actually satisfy its stringent conditions. In this paper, we bridge this long-standing theoretical gap in the literature by providing a unified generalization bound. Our analysis accommodates both classification and regression tasks as well as most commonly used divergence measures, and, more importantly, it theoretically recovers a large number of previous models. In addition, we identify the key differences among the distribution divergence measures underlying the diverse models and conduct a comprehensive, in-depth comparison of the commonly used divergence measures. Based on the unified generalization bound, we propose new domain adaptation models that achieve transferability through domain-invariant representations and conduct experiments on real-world datasets that corroborate our theoretical findings. We believe these insights are helpful in guiding the future design of distribution-aligning domain adaptation algorithms.
Cite
Text
Yuan et al. "A Unified Domain Adaptation Framework with Distinctive Divergence Analysis." Transactions on Machine Learning Research, 2022.
Markdown
[Yuan et al. "A Unified Domain Adaptation Framework with Distinctive Divergence Analysis." Transactions on Machine Learning Research, 2022.](https://mlanthology.org/tmlr/2022/yuan2022tmlr-unified/)
BibTeX
@article{yuan2022tmlr-unified,
title = {{A Unified Domain Adaptation Framework with Distinctive Divergence Analysis}},
author = {Yuan, Zhiri and Hu, Xixu and Wu, Qi and Ma, Shumin and Leung, Cheuk Hang and Shen, Xin and Huang, Yiyan},
journal = {Transactions on Machine Learning Research},
year = {2022},
url = {https://mlanthology.org/tmlr/2022/yuan2022tmlr-unified/}
}