Learning to Associate Every Segment for Video Panoptic Segmentation

Abstract

Temporal correspondence -- linking pixels or objects across frames -- is a fundamental supervisory signal for video models. For panoptic understanding of dynamic scenes, we further extend this concept to every segment. Specifically, we aim to learn coarse segment-level matching and fine pixel-level matching together. We implement this idea by designing two novel learning objectives. To validate our proposals, we adopt a deep siamese model and train it to learn temporal correspondence at two levels (i.e., segment and pixel) along with the target task. At inference time, the model processes each frame independently, without any extra computation or post-processing. We show that our per-frame inference model achieves new state-of-the-art results on the Cityscapes-VPS and VIPER datasets. Moreover, owing to its high efficiency, the model runs about 3x faster than the previous state-of-the-art approach. The code and models will be released.
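
The segment-level objective described above can be pictured as a cross-frame contrastive matching problem over pooled segment embeddings from the shared (siamese) backbone. The sketch below is a minimal illustration of that idea, not the authors' released code: the function name, the use of ground-truth track ids as positives, and the softmax/cross-entropy formulation are assumptions made for clarity.

```python
# Minimal sketch of a segment-level association loss between two frames.
# Assumptions (not from the paper's released code): segment embeddings are
# already pooled per segment, and segments sharing a track id are positives.
import torch
import torch.nn.functional as F


def segment_association_loss(emb_t, emb_t1, ids_t, ids_t1, temperature=0.1):
    """Contrastive matching of segment embeddings across frames t and t+1.

    emb_t  : (N, D) segment embeddings from frame t
    emb_t1 : (M, D) segment embeddings from frame t+1
    ids_t, ids_t1 : ground-truth track ids (LongTensor) for each segment
    """
    emb_t = F.normalize(emb_t, dim=1)
    emb_t1 = F.normalize(emb_t1, dim=1)
    logits = emb_t @ emb_t1.t() / temperature  # (N, M) similarity matrix

    # For each segment in frame t with exactly one match in frame t+1,
    # treat the matched segment as the positive class in a softmax over M.
    rows, targets = [], []
    for i, tid in enumerate(ids_t):
        js = (ids_t1 == tid).nonzero(as_tuple=True)[0]
        if len(js) == 1:
            rows.append(i)
            targets.append(js.item())
    if not rows:
        return logits.new_zeros(())

    targets = torch.tensor(targets, device=logits.device)
    return F.cross_entropy(logits[rows], targets)


# Example usage with random embeddings and toy track ids.
if __name__ == "__main__":
    emb_t = torch.randn(4, 128)
    emb_t1 = torch.randn(5, 128)
    ids_t = torch.tensor([1, 2, 3, 4])
    ids_t1 = torch.tensor([2, 1, 4, 5, 3])
    print(segment_association_loss(emb_t, emb_t1, ids_t, ids_t1))
```

A pixel-level counterpart would apply the same cross-frame contrastive idea to dense feature maps rather than pooled segment embeddings; since both losses act only at training time, per-frame inference needs no extra computation.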

Cite

Text

Woo et al. "Learning to Associate Every Segment for Video Panoptic Segmentation." Conference on Computer Vision and Pattern Recognition, 2021. doi:10.1109/CVPR46437.2021.00273

Markdown

[Woo et al. "Learning to Associate Every Segment for Video Panoptic Segmentation." Conference on Computer Vision and Pattern Recognition, 2021.](https://mlanthology.org/cvpr/2021/woo2021cvpr-learning/) doi:10.1109/CVPR46437.2021.00273

BibTeX

@inproceedings{woo2021cvpr-learning,
  title     = {{Learning to Associate Every Segment for Video Panoptic Segmentation}},
  author    = {Woo, Sanghyun and Kim, Dahun and Lee, Joon-Young and Kweon, In So},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2021},
  pages     = {2705-2714},
  doi       = {10.1109/CVPR46437.2021.00273},
  url       = {https://mlanthology.org/cvpr/2021/woo2021cvpr-learning/}
}