On Unsupervised Domain Adaptation: Pseudo Label Guided Mixup for Adversarial Prompt Tuning

Abstract

To date, a mainstream line of methods for unsupervised domain adaptation (UDA) learns label-discriminative features through a label classifier and domain-invariant features through a domain discriminator, trained in an adversarial scheme. However, these methods offer no explicit control over aligning source and target data within the same label class, which degrades the classifier's performance on the target domain. In this paper, we propose PL-Mix, a pseudo-label-guided Mixup method based on adversarial prompt tuning. Specifically, PL-Mix facilitates class-dependent alignment and alleviates the impact of noisy pseudo labels. We then theoretically justify that PL-Mix improves generalization for UDA. Extensive comparisons with existing models also demonstrate the effectiveness of PL-Mix.
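
The page does not include code, so the following is a minimal NumPy sketch of the pseudo-label-guided Mixup idea the abstract describes, under the assumption that source and target features have already been extracted and target pseudo labels have already been predicted. All names here (`pl_guided_mixup`, `src_feats`, `tgt_pseudo`, `alpha`) are hypothetical; this illustrates the general technique of mixing within a shared (pseudo) class, not the authors' actual implementation.

```python
import numpy as np

def pl_guided_mixup(src_feats, src_labels, tgt_feats, tgt_pseudo, alpha=0.2, rng=None):
    """Pair each source example with a target example that shares its
    (pseudo) label and interpolate the two, so mixing stays within a class."""
    rng = rng or np.random.default_rng()
    mixed, labels = [], []
    for x_s, y in zip(src_feats, src_labels):
        # Candidate targets: examples whose pseudo label matches the source label.
        idx = np.flatnonzero(tgt_pseudo == y)
        if idx.size == 0:
            continue  # no same-class target available; skip this source example
        x_t = tgt_feats[rng.choice(idx)]
        lam = rng.beta(alpha, alpha)  # standard Mixup interpolation coefficient
        mixed.append(lam * x_s + (1.0 - lam) * x_t)
        labels.append(y)  # both endpoints share the label, so it is unchanged
    return np.stack(mixed), np.asarray(labels)

# Toy usage: 4-dimensional features, 3 classes; tgt_y plays the role of pseudo labels.
rng = np.random.default_rng(0)
src_x, src_y = rng.normal(size=(8, 4)), rng.integers(0, 3, size=8)
tgt_x, tgt_y = rng.normal(size=(8, 4)), rng.integers(0, 3, size=8)
x_mix, y_mix = pl_guided_mixup(src_x, src_y, tgt_x, tgt_y, rng=rng)
```

Restricting the mix to pairs with matching labels is what makes the alignment class-dependent, and because each mixed sample interpolates a pseudo-labeled target example with a genuinely labeled source example, a single incorrect pseudo label only partially contaminates any training sample; this is one plausible reading of the abstract's claim that PL-Mix alleviates noisy pseudo labels.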

Cite

Text

Kong et al. "On Unsupervised Domain Adaptation: Pseudo Label Guided Mixup for Adversarial Prompt Tuning." AAAI Conference on Artificial Intelligence, 2024. doi:10.1609/AAAI.V38I16.29800

Markdown

[Kong et al. "On Unsupervised Domain Adaptation: Pseudo Label Guided Mixup for Adversarial Prompt Tuning." AAAI Conference on Artificial Intelligence, 2024.](https://mlanthology.org/aaai/2024/kong2024aaai-unsupervised/) doi:10.1609/AAAI.V38I16.29800

BibTeX

@inproceedings{kong2024aaai-unsupervised,
  title     = {{On Unsupervised Domain Adaptation: Pseudo Label Guided Mixup for Adversarial Prompt Tuning}},
  author    = {Kong, Fanshuang and Zhang, Richong and Wang, Ziqiao and Mao, Yongyi},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2024},
  pages     = {18399--18407},
  doi       = {10.1609/AAAI.V38I16.29800},
  url       = {https://mlanthology.org/aaai/2024/kong2024aaai-unsupervised/}
}