SCVRL: Shuffled Contrastive Video Representation Learning

Abstract

We propose SCVRL, a novel contrastive framework for self-supervised video representation learning. Unlike previous contrastive learning methods that mostly focus on learning visual semantics (e.g., CVRL), SCVRL is capable of learning both semantic and motion patterns. To this end, we reformulate the popular shuffling pretext task within a modern contrastive learning paradigm. We show that our transformer-based network has a natural capacity to learn motion in self-supervised settings and achieves strong performance, outperforming CVRL on four benchmarks.
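The core idea of combining a shuffling pretext task with contrastive learning can be illustrated with a minimal sketch. This is an assumption about the general mechanism, not the paper's exact formulation: embeddings of two temporally ordered clips from the same video form a positive pair, while a frame-shuffled clip is treated as an additional negative, so that matching on appearance alone is not enough and the encoder must also be sensitive to motion. The function name `nt_xent_with_shuffle` and all shapes below are illustrative.

```python
import numpy as np

def nt_xent_with_shuffle(z_a, z_b, z_shuf, tau=0.1):
    """Sketch of a shuffle-aware NT-Xent-style loss (illustrative, not the
    paper's exact objective).

    z_a, z_b : (n, d) embeddings of two ordered clips per video (positives)
    z_shuf   : (n, d) embeddings of frame-shuffled clips (extra negatives)
    tau      : softmax temperature
    """
    def l2_normalize(z):
        return z / np.linalg.norm(z, axis=1, keepdims=True)

    z_a, z_b, z_shuf = l2_normalize(z_a), l2_normalize(z_b), l2_normalize(z_shuf)
    n = z_a.shape[0]

    # Candidate keys for each anchor: all ordered clips (one is the positive,
    # the rest are cross-video negatives) plus all shuffled clips.
    keys = np.concatenate([z_b, z_shuf], axis=0)   # (2n, d)
    logits = z_a @ keys.T / tau                    # (n, 2n) cosine similarities

    # The positive for anchor i is its own ordered clip z_b[i] at column i.
    labels = np.arange(n)
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(n), labels].mean()
```

In this sketch, shuffled clips of the *same* video are hard negatives: they share appearance with the anchor but not temporal order, which is what pushes the representation beyond static semantics.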

Cite

Text

Dorkenwald et al. "SCVRL: Shuffled Contrastive Video Representation Learning." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2022. doi:10.1109/CVPRW56347.2022.00458

Markdown

[Dorkenwald et al. "SCVRL: Shuffled Contrastive Video Representation Learning." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2022.](https://mlanthology.org/cvprw/2022/dorkenwald2022cvprw-scvrl/) doi:10.1109/CVPRW56347.2022.00458

BibTeX

@inproceedings{dorkenwald2022cvprw-scvrl,
  title     = {{SCVRL: Shuffled Contrastive Video Representation Learning}},
  author    = {Dorkenwald, Michael and Xiao, Fanyi and Brattoli, Biagio and Tighe, Joseph and Modolo, Davide},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
  year      = {2022},
  pages     = {4131--4140},
  doi       = {10.1109/CVPRW56347.2022.00458},
  url       = {https://mlanthology.org/cvprw/2022/dorkenwald2022cvprw-scvrl/}
}