Unified Mask Embedding and Correspondence Learning for Self-Supervised Video Segmentation
Abstract
The objective of this paper is self-supervised learning of video object segmentation (VOS). We develop a unified framework that simultaneously models cross-frame dense correspondence for locally discriminative feature learning and embeds object-level context for target-mask decoding. As a result, it is able to directly learn mask-guided sequential segmentation from unlabeled videos, in contrast to previous efforts that usually rely on an oblique solution: cheaply "copying" labels according to pixel-wise correlations. Concretely, our algorithm alternates between i) clustering video pixels to create pseudo segmentation labels ex nihilo; and ii) utilizing the pseudo labels to learn mask encoding and decoding for VOS. Unsupervised correspondence learning is further incorporated into this self-taught, mask-embedding scheme, so as to ensure the generic nature of the learnt representation and avoid cluster degeneracy. Our algorithm sets a new state-of-the-art on two standard benchmarks (i.e., DAVIS17 and YouTube-VOS), narrowing the gap between self- and fully-supervised VOS in terms of both performance and network architecture design. Our full code will be released.
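To make the alternating scheme concrete, the following is a minimal sketch under stated assumptions, not the authors' implementation: the `encoder`, the `mask_decoder` signature, the tensor shapes, the k-means pseudo-labeling, the affinity-based label transfer, and the reconstruction-style correspondence term are all hypothetical stand-ins for the components the abstract names (pseudo-label clustering, mask encoding/decoding, and unsupervised correspondence learning).

```python
import torch
import torch.nn.functional as F

def kmeans_pixels(feat, k, iters=10):
    """Cluster dense features (D, h, w) into k pseudo-label ids (h, w).
    Illustrative stand-in for the paper's pseudo-label clustering step."""
    feat = feat.detach()
    x = feat.flatten(1).t()                               # (h*w, D)
    centers = x[torch.randperm(x.size(0))[:k]].clone()    # random init
    for _ in range(iters):
        assign = torch.cdist(x, centers).argmin(1)        # (h*w,)
        for c in range(k):
            pts = x[assign == c]
            if len(pts) > 0:
                centers[c] = pts.mean(0)
    return assign.view(feat.shape[1], feat.shape[2])

def transfer_labels(labels0, feat0, feat_t, temp=0.07):
    """Warp frame-0 pseudo labels to frame t via a soft feature affinity."""
    k = int(labels0.max()) + 1
    f0 = F.normalize(feat0.flatten(1), dim=0)             # (D, N)
    ft = F.normalize(feat_t.flatten(1), dim=0)            # (D, N)
    aff = (ft.t() @ f0 / temp).softmax(-1)                # (N, N)
    onehot = F.one_hot(labels0.flatten(), k).float()      # (N, K)
    return (aff @ onehot).argmax(-1).view(labels0.shape)  # (h, w)

def correspondence_loss(feat0, feat_t, temp=0.07):
    """Simple frame-reconstruction proxy for unsupervised correspondence
    learning: pixels of frame t are reconstructed from frame 0 features."""
    f0 = F.normalize(feat0.flatten(1), dim=0)
    ft = F.normalize(feat_t.flatten(1), dim=0)
    aff = (ft.t() @ f0 / temp).softmax(-1)                # (N, N)
    recon = (aff @ feat0.flatten(1).t()).t()              # (D, N)
    return F.mse_loss(recon, feat_t.flatten(1))

def train_step(frames, encoder, mask_decoder, optimizer, k=8):
    """One alternating step: i) cluster pixels into pseudo masks ex nihilo,
    ii) learn mask-guided sequential segmentation, plus a correspondence term."""
    feats = encoder(frames)                                # (T, D, h, w), assumed

    with torch.no_grad():                                  # i) pseudo labels
        pseudo0 = kmeans_pixels(feats[0], k)               # (h, w) long ids

    loss_seg = feats.new_zeros(())
    loss_corr = feats.new_zeros(())
    prev_mask = F.one_hot(pseudo0, k).permute(2, 0, 1).float()  # (K, h, w)
    for t in range(1, feats.size(0)):                      # ii) mask decoding
        logits = mask_decoder(feats[t], prev_mask)         # (K, h, w), assumed API
        with torch.no_grad():
            target = transfer_labels(pseudo0, feats[0], feats[t])
        loss_seg = loss_seg + F.cross_entropy(logits[None], target[None])
        loss_corr = loss_corr + correspondence_loss(feats[0], feats[t])
        prev_mask = logits.softmax(0).detach()

    optimizer.zero_grad()
    (loss_seg + loss_corr).backward()
    optimizer.step()
```

In a realistic setup the pseudo labels would likely be refreshed periodically over the whole dataset rather than recomputed per step, and the correspondence objective would be chosen to match the paper's actual formulation; this sketch only illustrates how the two stages could interleave.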
Cite
Text
Li et al. "Unified Mask Embedding and Correspondence Learning for Self-Supervised Video Segmentation." Conference on Computer Vision and Pattern Recognition, 2023. doi:10.1109/CVPR52729.2023.01794
Markdown
[Li et al. "Unified Mask Embedding and Correspondence Learning for Self-Supervised Video Segmentation." Conference on Computer Vision and Pattern Recognition, 2023.](https://mlanthology.org/cvpr/2023/li2023cvpr-unified/) doi:10.1109/CVPR52729.2023.01794
BibTeX
@inproceedings{li2023cvpr-unified,
title = {{Unified Mask Embedding and Correspondence Learning for Self-Supervised Video Segmentation}},
author = {Li, Liulei and Wang, Wenguan and Zhou, Tianfei and Li, Jianwu and Yang, Yi},
booktitle = {Conference on Computer Vision and Pattern Recognition},
year = {2023},
pages = {18706-18716},
doi = {10.1109/CVPR52729.2023.01794},
url = {https://mlanthology.org/cvpr/2023/li2023cvpr-unified/}
}