Generation, Augmentation, and Alignment: A Pseudo-Source Domain Based Method for Source-Free Domain Adaptation

Abstract

Source-free domain adaptation (SFDA) aims to train a well-performing model on the target domain given only a trained source model and unlabeled target samples. Although existing SFDA methods have achieved remarkable progress, they do not explicitly reduce the distribution shift across domains, which is the key to good adaptation. However, the absence of source samples makes it difficult to estimate and reduce the domain discrepancy. Fortunately, even without source samples, we find that some target samples can be used to approximate the source domain; we denote this set the pseudo-source domain and use it to approximately estimate the domain discrepancy. Inspired by this observation, in this paper we propose a novel method based on the pseudo-source domain that explicitly reduces the domain discrepancy even without source samples. The proposed method generates and augments the pseudo-source domain, and then performs distribution alignment with four novel losses built on a pseudo-labeling strategy, thereby reducing the domain shift. Extensive results on three real-world datasets verify the effectiveness of the proposed method. The source code is available at https://github.com/yuntaodu/PS_code.
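The sketch below illustrates the general idea of a pseudo-source domain in a hedged, simplified form: it assumes the pseudo-source set is built from the most confident target predictions of the source model and that alignment is measured with a simple mean-difference (MMD-style) term. The function names, the confidence-based selection rule, and the alignment loss are illustrative assumptions, not the paper's actual four losses or implementation.

```python
# Minimal illustrative sketch (not the authors' implementation).
# Assumption: pseudo-source samples are the most confident target predictions,
# and alignment uses a simple linear-kernel MMD term.
import torch
import torch.nn.functional as F


def build_pseudo_source(features, logits, keep_ratio=0.1):
    """Select the most confident target samples as a pseudo-source set."""
    probs = F.softmax(logits, dim=1)
    confidence, pseudo_labels = probs.max(dim=1)
    k = max(1, int(keep_ratio * features.size(0)))
    idx = confidence.topk(k).indices
    return features[idx], pseudo_labels[idx]


def mmd_loss(source_feats, target_feats):
    """Mean-difference (linear-kernel MMD) between pseudo-source and target features."""
    delta = source_feats.mean(dim=0) - target_feats.mean(dim=0)
    return (delta * delta).sum()


# Toy usage: random tensors stand in for backbone features and classifier logits.
feats = torch.randn(128, 256)      # target-domain features
logits = torch.randn(128, 31)      # source-model predictions on target samples
ps_feats, ps_labels = build_pseudo_source(feats, logits, keep_ratio=0.1)
loss = mmd_loss(ps_feats, feats)   # alignment term added to the adaptation objective
print(ps_feats.shape, loss.item())
```

In the paper, this pseudo-source set is further augmented and aligned with the remaining target samples through four dedicated losses; the MMD term above is only a stand-in for that alignment step.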

Cite

Text

Du et al. "Generation, Augmentation, and Alignment: A Pseudo-Source Domain Based Method for Source-Free Domain Adaptation." Machine Learning, 2024. doi:10.1007/S10994-023-06432-8

Markdown

[Du et al. "Generation, Augmentation, and Alignment: A Pseudo-Source Domain Based Method for Source-Free Domain Adaptation." Machine Learning, 2024.](https://mlanthology.org/mlj/2024/du2024mlj-generation/) doi:10.1007/S10994-023-06432-8

BibTeX

@article{du2024mlj-generation,
  title     = {{Generation, Augmentation, and Alignment: A Pseudo-Source Domain Based Method for Source-Free Domain Adaptation}},
  author    = {Du, Yuntao and Yang, Haiyang and Chen, Mingcai and Luo, Hongtao and Jiang, Juan and Xin, Yi and Wang, Chongjun},
  journal   = {Machine Learning},
  year      = {2024},
  pages     = {3611--3631},
  doi       = {10.1007/S10994-023-06432-8},
  volume    = {113},
  url       = {https://mlanthology.org/mlj/2024/du2024mlj-generation/}
}