Universal Semi-Supervised Model Adaptation via Collaborative Consistency Training

Abstract

In this paper, we introduce a realistic and challenging domain adaptation problem called Universal Semi-supervised Model Adaptation (USMA), which i) requires only a pre-trained source model, ii) allows the source and target domains to have different label sets, i.e., they share a common label set while each holds its own private label set, and iii) requires only a few labeled samples in each class of the target domain. To address USMA, we propose a collaborative consistency training framework that regularizes the prediction consistency between two models, i.e., a pre-trained source model and its variant pre-trained with target data only, and combines their complementary strengths to learn a more powerful model. The rationale of our framework stems from the observation that the source model performs better than the target-only model on common categories, whereas the target-only model performs better on target-private categories. We also propose a two-perspective consistency regularization, i.e., sample-wise and class-wise, to further improve training. Experimental results demonstrate the effectiveness of our method on several benchmark datasets.
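As a rough illustration of the collaborative consistency idea summarized above, the PyTorch-style sketch below enforces sample-wise and class-wise agreement between a pre-trained source model and a target-only model on unlabeled target data, alongside a supervised loss on the few labeled target samples. This is only a minimal sketch under simplifying assumptions (a shared output label space, symmetric-KL consistency, a single weighting factor); the paper's exact losses, label-space handling, and training schedule may differ, and all names here (source_model, target_model, lam, etc.) are hypothetical.

import torch
import torch.nn.functional as F

def sample_wise_consistency(logits_src, logits_tgt):
    # Per-sample agreement: symmetric KL between the two models' predictions.
    p_src = F.softmax(logits_src, dim=1)
    p_tgt = F.softmax(logits_tgt, dim=1)
    kl_st = F.kl_div(F.log_softmax(logits_tgt, dim=1), p_src, reduction="batchmean")  # KL(p_src || p_tgt)
    kl_ts = F.kl_div(F.log_softmax(logits_src, dim=1), p_tgt, reduction="batchmean")  # KL(p_tgt || p_src)
    return 0.5 * (kl_st + kl_ts)

def class_wise_consistency(logits_src, logits_tgt):
    # Class-wise agreement: align the batch-averaged class distributions of the two models.
    q_src = F.softmax(logits_src, dim=1).mean(dim=0)
    q_tgt = F.softmax(logits_tgt, dim=1).mean(dim=0)
    kl_st = F.kl_div(q_tgt.log(), q_src, reduction="sum")
    kl_ts = F.kl_div(q_src.log(), q_tgt, reduction="sum")
    return 0.5 * (kl_st + kl_ts)

def training_step(source_model, target_model, labeled_x, labeled_y, unlabeled_x, lam=1.0):
    # Assumes both classifiers output logits over the same label space (a simplification).
    # Supervised loss on the few labeled target samples, applied to both models.
    sup = F.cross_entropy(source_model(labeled_x), labeled_y) \
        + F.cross_entropy(target_model(labeled_x), labeled_y)
    # Collaborative consistency on unlabeled target data.
    logits_src = source_model(unlabeled_x)
    logits_tgt = target_model(unlabeled_x)
    cons = sample_wise_consistency(logits_src, logits_tgt) \
         + class_wise_consistency(logits_src, logits_tgt)
    return sup + lam * cons

The intent of the sketch is only to show how the two complementary models could co-regularize each other from both the sample and class perspectives; it is not the authors' implementation.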

Cite

Text

Yan et al. "Universal Semi-Supervised Model Adaptation via Collaborative Consistency Training." Winter Conference on Applications of Computer Vision, 2024.

Markdown

[Yan et al. "Universal Semi-Supervised Model Adaptation via Collaborative Consistency Training." Winter Conference on Applications of Computer Vision, 2024.](https://mlanthology.org/wacv/2024/yan2024wacv-universal/)

BibTeX

@inproceedings{yan2024wacv-universal,
  title     = {{Universal Semi-Supervised Model Adaptation via Collaborative Consistency Training}},
  author    = {Yan, Zizheng and Wu, Yushuang and Qin, Yipeng and Han, Xiaoguang and Cui, Shuguang and Li, Guanbin},
  booktitle = {Winter Conference on Applications of Computer Vision},
  year      = {2024},
  pages     = {872--882},
  url       = {https://mlanthology.org/wacv/2024/yan2024wacv-universal/}
}