Unsupervised Video Domain Adaptation with Masked Pre-Training and Collaborative Self-Training

Abstract

In this work we tackle the problem of unsupervised domain adaptation (UDA) for video action recognition. Our approach, which we call UNITE, uses an image teacher model to adapt a video student model to the target domain. UNITE first employs self-supervised pre-training to promote discriminative feature learning on target-domain videos using a teacher-guided masked distillation objective. We then perform self-training on masked target data, using the video student model and the image teacher model together to generate improved pseudolabels for unlabeled target videos. Our self-training process successfully leverages the strengths of both models to achieve strong transfer performance across domains. We evaluate our approach on multiple video domain adaptation benchmarks and observe significant improvements upon previously reported results.
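To make the masked-distillation idea in the abstract concrete, here is a minimal NumPy sketch of a teacher-guided masked distillation objective: the student is penalized for deviating from frozen image-teacher features, but only at masked token positions. This is an illustrative assumption, not the paper's exact loss; the function name, shapes, and L2 distance are placeholders chosen for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

def masked_distillation_loss(student_feats, teacher_feats, mask):
    """Mean squared error between student predictions and frozen
    image-teacher features, averaged only over masked token positions.
    A sketch of a teacher-guided masked distillation objective; the
    actual UNITE formulation may differ (see the paper for details).

    student_feats, teacher_feats: (tokens, dim) arrays
    mask: (tokens,) array, 1.0 where a token is masked, else 0.0
    """
    per_token = ((student_feats - teacher_feats) ** 2).mean(axis=-1)
    # Average the per-token error over masked positions only.
    return float((per_token * mask).sum() / max(mask.sum(), 1.0))

# Toy example: 16 tokens, 8-dim features, roughly 60% of tokens masked.
T, D = 16, 8
teacher = rng.normal(size=(T, D))
student = teacher + 0.1 * rng.normal(size=(T, D))  # imperfect student
mask = (rng.random(T) < 0.6).astype(float)

loss = masked_distillation_loss(student, teacher, mask)
```

In a real training loop the teacher features would come from a frozen image model applied to unmasked frames, and the loss would be backpropagated through the video student only.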

Cite

Text

Reddy et al. "Unsupervised Video Domain Adaptation with Masked Pre-Training and Collaborative Self-Training." Conference on Computer Vision and Pattern Recognition, 2024. doi:10.1109/CVPR52733.2024.01790

Markdown

[Reddy et al. "Unsupervised Video Domain Adaptation with Masked Pre-Training and Collaborative Self-Training." Conference on Computer Vision and Pattern Recognition, 2024.](https://mlanthology.org/cvpr/2024/reddy2024cvpr-unsupervised/) doi:10.1109/CVPR52733.2024.01790

BibTeX

@inproceedings{reddy2024cvpr-unsupervised,
  title     = {{Unsupervised Video Domain Adaptation with Masked Pre-Training and Collaborative Self-Training}},
  author    = {Reddy, Arun and Paul, William and Rivera, Corban and Shah, Ketul and de Melo, Celso M. and Chellappa, Rama},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2024},
  pages     = {18919-18929},
  doi       = {10.1109/CVPR52733.2024.01790},
  url       = {https://mlanthology.org/cvpr/2024/reddy2024cvpr-unsupervised/}
}