VITA: Video Instance Segmentation via Object Token Association
Abstract
We introduce a novel paradigm for offline Video Instance Segmentation (VIS), based on the hypothesis that explicit object-oriented information can be a strong clue for understanding the context of the entire sequence. To this end, we propose VITA, a simple structure built on top of an off-the-shelf Transformer-based image instance segmentation model. Specifically, we use an image object detector as a means of distilling object-specific contexts into object tokens. VITA accomplishes video-level understanding by associating frame-level object tokens without using spatio-temporal backbone features. By effectively building relationships between objects using the condensed information, VITA achieves state-of-the-art results on VIS benchmarks with a ResNet-50 backbone: 49.8 AP and 45.7 AP on YouTube-VIS 2019 and 2021, respectively, and 19.6 AP on OVIS. Moreover, thanks to its object-token-based structure, which is disjoint from the backbone features, VITA offers practical advantages that previous offline VIS methods have not explored: handling long and high-resolution videos on a common GPU, and freezing a frame-level detector trained on the image domain. Code is available at https://github.com/sukjunhwang/VITA.
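To make the token-association idea concrete, below is a minimal PyTorch sketch based only on the abstract: per-frame object tokens from a (possibly frozen) image-level detector are flattened across time and attended to by learnable video-level queries. The module name `TokenAssociator`, the 256-d token size, the query count, and the use of `nn.TransformerDecoder` are all illustrative assumptions, not the authors' implementation.

```python
# A minimal sketch of VITA-style object token association, derived from the
# abstract above. All names, dimensions, and the choice of nn.TransformerDecoder
# are assumptions for illustration, not the paper's actual architecture.
import torch
import torch.nn as nn

class TokenAssociator(nn.Module):
    def __init__(self, dim=256, num_video_queries=100, num_layers=3):
        super().__init__()
        # Learnable video-level queries that aggregate per-frame object tokens.
        self.video_queries = nn.Parameter(torch.randn(num_video_queries, dim))
        layer = nn.TransformerDecoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=num_layers)

    def forward(self, frame_tokens):
        # frame_tokens: (T, N, dim) -- N object tokens per frame, produced by an
        # image-level detector. Backbone feature maps are never touched here, so
        # memory grows with the number of tokens rather than with dense
        # spatio-temporal features, which is what lets long, high-resolution
        # videos fit on a common GPU.
        T, N, dim = frame_tokens.shape
        memory = frame_tokens.reshape(1, T * N, dim)   # flatten time into one token sequence
        queries = self.video_queries.unsqueeze(0)      # (1, Q, dim)
        video_tokens = self.decoder(queries, memory)   # (1, Q, dim)
        return video_tokens.squeeze(0)                 # video-level instance embeddings

# Usage: 36 frames x 100 object tokens -> 100 video-level embeddings.
assoc = TokenAssociator()
tokens = torch.randn(36, 100, 256)
print(assoc(tokens).shape)  # torch.Size([100, 256])
```

In this reading, the decoder queries play the role of video-level instance hypotheses, and each one collects evidence for a single object across all frames from the condensed token sequence.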
Cite
Text
Heo et al. "VITA: Video Instance Segmentation via Object Token Association." Neural Information Processing Systems, 2022.
Markdown
[Heo et al. "VITA: Video Instance Segmentation via Object Token Association." Neural Information Processing Systems, 2022.](https://mlanthology.org/neurips/2022/heo2022neurips-vita/)
BibTeX
@inproceedings{heo2022neurips-vita,
  title     = {{VITA: Video Instance Segmentation via Object Token Association}},
  author    = {Heo, Miran and Hwang, Sukjun and Oh, Seoung Wug and Lee, Joon-Young and Kim, Seon Joo},
  booktitle = {Neural Information Processing Systems},
  year      = {2022},
  url       = {https://mlanthology.org/neurips/2022/heo2022neurips-vita/}
}