SSTVOS: Sparse Spatiotemporal Transformers for Video Object Segmentation

Abstract

In this paper we introduce a Transformer-based approach to video object segmentation (VOS). To address compounding error and scalability issues of prior work, we propose a scalable, end-to-end method for VOS called Sparse Spatiotemporal Transformers (SST). SST extracts per-pixel representations for each object in a video using sparse attention over spatiotemporal features. Our attention-based formulation for VOS allows a model to learn to attend over a history of multiple frames and provides suitable inductive bias for performing correspondence-like computations necessary for solving motion segmentation. We demonstrate the effectiveness of attention-based networks over recurrent networks in the spatiotemporal domain. Our method achieves competitive results on YouTube-VOS and DAVIS 2017 with improved scalability and robustness to occlusions compared with the state of the art. Code is available at https://github.com/dukebw/SSTVOS.
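To make the attention formulation in the abstract concrete, below is a minimal sketch (not the authors' implementation) of how per-pixel features from the current frame can attend over a history of past frames. The class name, the use of dense attention over a small temporal window, and the reliance on torch.nn.MultiheadAttention are illustrative assumptions for readability; SST itself uses sparse attention over spatiotemporal features, for which see the linked repository.

```python
# Illustrative sketch only, assuming PyTorch: current-frame pixels attend over
# per-pixel features of T past frames. Dense attention is used here for
# simplicity; the SST paper proposes sparse spatiotemporal attention.
import torch
import torch.nn as nn


class SpatiotemporalAttentionSketch(nn.Module):
    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, curr_feats: torch.Tensor, past_feats: torch.Tensor) -> torch.Tensor:
        """curr_feats: (B, C, H, W) features of the current frame.
        past_feats: (B, T, C, H, W) features of a history of T past frames."""
        b, c, h, w = curr_feats.shape
        t = past_feats.shape[1]
        # Flatten spatial and spatiotemporal dimensions into token sequences.
        q = curr_feats.flatten(2).transpose(1, 2)                        # (B, H*W, C)
        kv = past_feats.permute(0, 1, 3, 4, 2).reshape(b, t * h * w, c)  # (B, T*H*W, C)
        # Each current-frame pixel attends over all pixels in the frame history.
        out, _ = self.attn(q, kv, kv)
        return out.transpose(1, 2).reshape(b, c, h, w)


if __name__ == "__main__":
    model = SpatiotemporalAttentionSketch(dim=64)
    curr = torch.randn(1, 64, 16, 16)
    past = torch.randn(1, 3, 64, 16, 16)
    print(model(curr, past).shape)  # torch.Size([1, 64, 16, 16])
```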

Cite

Text

Duke et al. "SSTVOS: Sparse Spatiotemporal Transformers for Video Object Segmentation." Conference on Computer Vision and Pattern Recognition, 2021. doi:10.1109/CVPR46437.2021.00585

Markdown

[Duke et al. "SSTVOS: Sparse Spatiotemporal Transformers for Video Object Segmentation." Conference on Computer Vision and Pattern Recognition, 2021.](https://mlanthology.org/cvpr/2021/duke2021cvpr-sstvos/) doi:10.1109/CVPR46437.2021.00585

BibTeX

@inproceedings{duke2021cvpr-sstvos,
  title     = {{SSTVOS: Sparse Spatiotemporal Transformers for Video Object Segmentation}},
  author    = {Duke, Brendan and Ahmed, Abdalla and Wolf, Christian and Aarabi, Parham and Taylor, Graham W.},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2021},
  pages     = {5912--5921},
  doi       = {10.1109/CVPR46437.2021.00585},
  url       = {https://mlanthology.org/cvpr/2021/duke2021cvpr-sstvos/}
}