Multi-Granularity Video Object Segmentation
Abstract
Current benchmarks for video segmentation are limited to annotating only salient objects (i.e., foreground instances). Despite their impressive architectural designs, previous works trained on these benchmarks have struggled to adapt to real-world scenarios. Thus, developing a new video segmentation dataset aimed at tracking multi-granularity segmentation targets in the video scene is necessary. In this work, we aim to generate a multi-granularity video segmentation dataset that is annotated with both salient and non-salient masks. To achieve this, we propose a large-scale, densely annotated multi-granularity video object segmentation (MUG-VOS) dataset that includes various types and granularities of mask annotations. We automatically collected a training set that assists in tracking both salient and non-salient objects, and we also curated a human-annotated test set for reliable evaluation. In addition, we present a memory-based mask propagation model (MMPM), trained and evaluated on the MUG-VOS dataset, which achieves the best performance among existing video object segmentation methods and Segment Anything Model (SAM)-based video segmentation methods.
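For readers unfamiliar with memory-based mask propagation, the sketch below illustrates the general idea behind this family of models: key/value features computed from past frames and their masks are stored in a memory bank, and the current frame reads from that memory via attention to propagate masks forward in time. This is a minimal, hypothetical sketch of the generic technique, not the paper's MMPM architecture; the class name, feature shapes, and dot-product readout are all assumptions.

```python
# Illustrative sketch of memory-based mask propagation (NOT the paper's MMPM;
# module names, feature shapes, and the readout rule are assumptions).
import torch
import torch.nn.functional as F

class MemoryBank:
    """Stores key/value features from past frames for attention readout."""
    def __init__(self):
        self.keys = []    # each entry: (C_k, H*W)
        self.values = []  # each entry: (C_v, H*W)

    def add(self, key, value):
        # key: (C_k, H, W), value: (C_v, H, W) from a past frame + its mask.
        self.keys.append(key.flatten(1))
        self.values.append(value.flatten(1))

    def read(self, query):
        # query: (C_k, H*W) key features of the current frame.
        keys = torch.cat(self.keys, dim=1)      # (C_k, N) over all memory frames
        values = torch.cat(self.values, dim=1)  # (C_v, N)
        # Scaled dot-product affinity between memory keys and current queries.
        affinity = keys.transpose(0, 1) @ query            # (N, H*W)
        weights = F.softmax(affinity / keys.shape[0] ** 0.5, dim=0)
        # Weighted sum of memory values: mask information propagated forward.
        return values @ weights                            # (C_v, H*W)

# Toy usage with random tensors standing in for encoder outputs.
C_k, C_v, H, W = 64, 128, 30, 54
bank = MemoryBank()
bank.add(torch.randn(C_k, H, W), torch.randn(C_v, H, W))  # frame 0 (given mask)
readout = bank.read(torch.randn(C_k, H, W).flatten(1))    # propagate to frame t
print(readout.shape)  # torch.Size([128, 1620])
```

In a full system, the readout would be fused with current-frame features and decoded into a segmentation mask, with the new frame's key/value features appended to the memory for subsequent frames.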
Cite
Text
Lim et al. "Multi-Granularity Video Object Segmentation." AAAI Conference on Artificial Intelligence, 2025. doi:10.1609/AAAI.V39I5.32552
Markdown
[Lim et al. "Multi-Granularity Video Object Segmentation." AAAI Conference on Artificial Intelligence, 2025.](https://mlanthology.org/aaai/2025/lim2025aaai-multi/) doi:10.1609/AAAI.V39I5.32552
BibTeX
@inproceedings{lim2025aaai-multi,
title = {{Multi-Granularity Video Object Segmentation}},
author = {Lim, Sangbeom and Kim, Seongchan and An, Seungjun and Cho, Seokju and Seo, Paul Hongsuck and Kim, Seungryong},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2025},
pages = {5200-5208},
doi = {10.1609/AAAI.V39I5.32552},
url = {https://mlanthology.org/aaai/2025/lim2025aaai-multi/}
}