Patch-Mix Transformer for Unsupervised Domain Adaptation: A Game Perspective
Abstract
Efforts have recently been made to leverage the vision transformer (ViT) for the challenging unsupervised domain adaptation (UDA) task. They typically adopt cross-attention in ViT for direct domain alignment. However, as the performance of cross-attention relies heavily on the quality of pseudo labels for target samples, it becomes less effective when the domain gap is large. We solve this problem from a game-theoretic perspective with the proposed model, dubbed PMTrans, which bridges the source and target domains with an intermediate domain. Specifically, we propose a novel ViT-based module called PatchMix that effectively builds up the intermediate domain, i.e., probability distribution, by learning to sample patches from both domains based on game-theoretical models. This way, it learns to mix patches from the source and target domains to maximize the cross entropy (CE), while exploiting two semi-supervised mixup losses in the feature and label spaces to minimize it. As such, we interpret the process of UDA as a min-max CE game with three players, including the feature extractor, classifier, and PatchMix, to find the Nash Equilibria. Moreover, we leverage attention maps from ViT to re-weight the label of each patch by its importance, making it possible to obtain more domain-discriminative feature representations. We conduct extensive experiments on four benchmark datasets, and the results show that PMTrans significantly surpasses the ViT-based and CNN-based SoTA methods by +3.6% on Office-Home, +1.4% on Office-31, and +17.7% on DomainNet, respectively. https://vlis2022.github.io/cvpr23/PMTrans
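The paper's PatchMix learns to sample patches from both domains and mixes their labels accordingly. The minimal NumPy sketch below illustrates only the basic patch-mix idea under a fixed mixing ratio; the learned game-theoretic sampler and the attention-based label re-weighting from the paper are not reproduced, and all names (`patch_mix`, `lam`) are hypothetical.

```python
import numpy as np

def patch_mix(src_patches, tgt_patches, src_label, tgt_pseudo, lam, seed=None):
    """Illustrative patch-level mixup (not the paper's learned sampler).

    src_patches, tgt_patches: (N, D) arrays of N patch embeddings.
    src_label, tgt_pseudo: (C,) one-hot label / pseudo-label vectors.
    lam: probability in [0, 1] that a patch is drawn from the source image.
    Returns the mixed patch sequence and a label mixed in proportion
    to the fraction of source patches actually kept.
    """
    rng = np.random.default_rng(seed)
    n = src_patches.shape[0]
    mask = rng.random(n) < lam                      # per-patch Bernoulli draw
    mixed = np.where(mask[:, None], src_patches, tgt_patches)
    ratio = mask.mean()                             # realized source fraction
    mixed_label = ratio * src_label + (1.0 - ratio) * tgt_pseudo
    return mixed, mixed_label
```

In the full method, the per-patch sampling probabilities are learned by the PatchMix player of the min-max CE game, and each patch's label contribution is further re-weighted by its ViT attention score rather than counted uniformly as here.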
Cite
Text
Zhu et al. "Patch-Mix Transformer for Unsupervised Domain Adaptation: A Game Perspective." Conference on Computer Vision and Pattern Recognition, 2023. doi:10.1109/CVPR52729.2023.00347
Markdown
[Zhu et al. "Patch-Mix Transformer for Unsupervised Domain Adaptation: A Game Perspective." Conference on Computer Vision and Pattern Recognition, 2023.](https://mlanthology.org/cvpr/2023/zhu2023cvpr-patchmix/) doi:10.1109/CVPR52729.2023.00347
BibTeX
@inproceedings{zhu2023cvpr-patchmix,
title = {{Patch-Mix Transformer for Unsupervised Domain Adaptation: A Game Perspective}},
author = {Zhu, Jinjing and Bai, Haotian and Wang, Lin},
booktitle = {Conference on Computer Vision and Pattern Recognition},
year = {2023},
pages = {3561-3571},
doi = {10.1109/CVPR52729.2023.00347},
url = {https://mlanthology.org/cvpr/2023/zhu2023cvpr-patchmix/}
}