Unsupervised Video Domain Adaptation for Action Recognition: A Disentanglement Perspective

Abstract

Unsupervised video domain adaptation is a practical yet challenging task. In this work, we tackle it, for the first time, from a disentanglement perspective. Our key idea is to handle the spatial and temporal domain divergence separately through disentanglement. Specifically, we consider the generation of cross-domain videos from two sets of latent factors: one encoding the static information and the other encoding the dynamic information. A Transfer Sequential VAE (TranSVAE) framework is then developed to model such generation. To better serve adaptation, we propose several objectives to constrain the latent factors. With these constraints, the spatial divergence can be readily removed by disentangling out the static domain-specific information, and the temporal divergence is further reduced at both the frame and video levels through adversarial learning. Extensive experiments on the UCF-HMDB, Jester, and Epic-Kitchens datasets verify the effectiveness and superiority of TranSVAE compared with several state-of-the-art approaches.
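To make the disentanglement idea in the abstract concrete, below is a minimal PyTorch sketch of such a framework: a sequential VAE that splits a video into a per-video static latent z_s and per-frame dynamic latents z_d, with a gradient-reversal layer feeding frame- and video-level domain discriminators on z_d. This is not the authors' released implementation; the class names (TranSVAESketch, GradReverse), layer sizes, and all architectural details are illustrative assumptions.

# Minimal sketch of the disentanglement idea; all names, layer sizes, and
# design details are illustrative assumptions, not the authors' code.
import torch
import torch.nn as nn


class GradReverse(torch.autograd.Function):
    # Gradient reversal: identity in the forward pass, negated gradient in
    # the backward pass, used to train domain discriminators adversarially.
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_out):
        return -ctx.lam * grad_out, None


class TranSVAESketch(nn.Module):
    # Encodes a video (a sequence of pre-extracted frame features) into a
    # static latent z_s shared across frames and per-frame dynamic latents z_d.
    def __init__(self, feat_dim=512, zs_dim=64, zd_dim=64, num_classes=12):
        super().__init__()
        self.rnn = nn.LSTM(feat_dim, 256, batch_first=True)
        self.static_head = nn.Linear(256, 2 * zs_dim)    # mu and log-variance
        self.dynamic_head = nn.Linear(256, 2 * zd_dim)
        self.decoder = nn.Linear(zs_dim + zd_dim, feat_dim)  # frame reconstruction
        self.classifier = nn.Linear(zd_dim, num_classes)     # action head on z_d only
        self.frame_disc = nn.Linear(zd_dim, 2)   # frame-level domain discriminator
        self.video_disc = nn.Linear(zd_dim, 2)   # video-level domain discriminator

    @staticmethod
    def sample(stats):
        # Reparameterization trick on concatenated (mu, logvar) statistics.
        mu, logvar = stats.chunk(2, dim=-1)
        return mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)

    def forward(self, x, lam=1.0):                       # x: (B, T, feat_dim)
        B, T, _ = x.shape
        h, _ = self.rnn(x)                               # (B, T, 256)
        z_s = self.sample(self.static_head(h[:, -1]))    # (B, zs_dim), one per video
        z_d = self.sample(self.dynamic_head(h))          # (B, T, zd_dim), one per frame
        # Reconstruct each frame from [static, dynamic]; domain-specific
        # appearance is meant to be absorbed by z_s, leaving z_d transferable.
        recon = self.decoder(torch.cat([z_s.unsqueeze(1).expand(B, T, -1), z_d], -1))
        logits = self.classifier(z_d.mean(dim=1))
        # Adversarially align the dynamic factors at frame and video levels.
        dom_frame = self.frame_disc(GradReverse.apply(z_d, lam))
        dom_video = self.video_disc(GradReverse.apply(z_d.mean(dim=1), lam))
        return recon, logits, dom_frame, dom_video

The key design choice mirrored here is that only the dynamic latents z_d carry the action label and are domain-aligned, while the static latent z_s is free to absorb domain-specific appearance, which is how the spatial divergence is separated from the temporal one.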

Cite

Text

Wei et al. "Unsupervised Video Domain Adaptation for Action Recognition: A Disentanglement Perspective." Neural Information Processing Systems, 2023.

Markdown

[Wei et al. "Unsupervised Video Domain Adaptation for Action Recognition: A Disentanglement Perspective." Neural Information Processing Systems, 2023.](https://mlanthology.org/neurips/2023/wei2023neurips-unsupervised/)

BibTeX

@inproceedings{wei2023neurips-unsupervised,
  title     = {{Unsupervised Video Domain Adaptation for Action Recognition: A Disentanglement Perspective}},
  author    = {Wei, Pengfei and Kong, Lingdong and Qu, Xinghua and Ren, Yi and Xu, Zhiqiang and Jiang, Jing and Yin, Xiang},
  booktitle = {Neural Information Processing Systems},
  year      = {2023},
  url       = {https://mlanthology.org/neurips/2023/wei2023neurips-unsupervised/}
}