STC: Spatio-Temporal Contrastive Learning for Video Instance Segmentation

Abstract

Video Instance Segmentation (VIS) is a task that simultaneously requires classification, segmentation, and instance association in a video. Recent VIS approaches rely on sophisticated pipelines to achieve this goal, including RoI-related operations or 3D convolutions. In contrast, we present a simple and efficient single-stage VIS framework based on the instance segmentation method CondInst by adding an extra tracking head. To improve instance association accuracy, a novel bi-directional spatio-temporal contrastive learning strategy for tracking embeddings across frames is proposed. Moreover, an instance-wise temporal consistency scheme is applied to produce temporally coherent results. Experiments conducted on the YouTube-VIS-2019, YouTube-VIS-2021, and OVIS-2021 datasets validate the effectiveness and efficiency of the proposed method. We hope the proposed framework can serve as a simple and strong alternative for many other instance-level video association tasks.
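The bi-directional contrastive idea can be illustrated with a minimal sketch: embeddings of the same instance in two frames act as positive pairs, all other instances as negatives, and a symmetric InfoNCE-style loss is applied in both directions (frame A to B and B to A). This is a generic illustration under assumed names (`contrastive_assoc_loss`, `temperature`), not the paper's exact loss formulation.

```python
import numpy as np

def contrastive_assoc_loss(emb_a, emb_b, temperature=0.1):
    """Symmetric (bi-directional) InfoNCE-style association loss.

    emb_a, emb_b: (N, D) arrays of tracking embeddings for N instances in two
    frames, where row i in both arrays belongs to the same instance.
    This is an illustrative sketch, not the authors' implementation.
    """
    # L2-normalize so the dot product is cosine similarity
    a = emb_a / np.linalg.norm(emb_a, axis=1, keepdims=True)
    b = emb_b / np.linalg.norm(emb_b, axis=1, keepdims=True)
    logits = a @ b.T / temperature  # (N, N) similarity matrix

    def nll_of_diagonal(lg):
        # Negative log-likelihood of the matching (diagonal) entries
        # under a row-wise softmax, computed in a numerically stable way.
        lg = lg - lg.max(axis=1, keepdims=True)
        log_denom = np.log(np.exp(lg).sum(axis=1))
        return -(np.diag(lg) - log_denom).mean()

    # Average the two directions: A -> B and B -> A
    return 0.5 * (nll_of_diagonal(logits) + nll_of_diagonal(logits.T))
```

When the paired embeddings are close, the loss is near zero; shuffling one frame's rows (breaking the association) drives it up, which is the signal used to train the tracking head.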

Cite

Text

Jiang et al. "STC: Spatio-Temporal Contrastive Learning for Video Instance Segmentation." European Conference on Computer Vision Workshops, 2022. doi:10.1007/978-3-031-25069-9_35

Markdown

[Jiang et al. "STC: Spatio-Temporal Contrastive Learning for Video Instance Segmentation." European Conference on Computer Vision Workshops, 2022.](https://mlanthology.org/eccvw/2022/jiang2022eccvw-stc/) doi:10.1007/978-3-031-25069-9_35

BibTeX

@inproceedings{jiang2022eccvw-stc,
  title     = {{STC: Spatio-Temporal Contrastive Learning for Video Instance Segmentation}},
  author    = {Jiang, Zhengkai and Gu, Zhangxuan and Peng, Jinlong and Zhou, Hang and Liu, Liang and Wang, Yabiao and Tai, Ying and Wang, Chengjie and Zhang, Liqing},
  booktitle = {European Conference on Computer Vision Workshops},
  year      = {2022},
  pages     = {539--556},
  doi       = {10.1007/978-3-031-25069-9_35},
  url       = {https://mlanthology.org/eccvw/2022/jiang2022eccvw-stc/}
}