Collaborative Vision-Text Representation Optimizing for Open-Vocabulary Segmentation
Abstract
Pre-trained vision-language models, e.g., CLIP, have been increasingly used to address the challenging Open-Vocabulary Segmentation (OVS) task, benefiting from their well-aligned vision-text embedding space. Typical solutions either freeze CLIP during training to unilaterally maintain its zero-shot capability, or fine-tune the CLIP vision encoder to achieve perceptual sensitivity to local regions; however, few of them incorporate vision-text collaborative optimization. Motivated by this, we propose Content-Dependent Transfer, which adaptively enhances each text embedding through interaction with the input image, offering a parameter-efficient way to optimize the text representation. In addition, we introduce a Representation Compensation strategy that revisits the original CLIP-V representation as compensation to maintain the zero-shot capability of CLIP. In this way, the vision and text representations of CLIP are optimized collaboratively, enhancing the alignment of the vision-text feature space. To the best of our knowledge, we are the first to establish a collaborative vision-text optimization mechanism within the OVS field. Extensive experiments demonstrate that our method achieves superior performance on popular OVS benchmarks. In open-vocabulary semantic segmentation, it outperforms the previous state-of-the-art approaches by +0.5, +2.3, +3.4, +0.4, and +1.1 mIoU on A-847, A-150, PC-459, PC-59, and PAS-20, respectively. Furthermore, in the panoptic setting on ADE20K, it achieves 27.1 PQ, 73.5 SQ, and 32.9 RQ. Code will be available at MAFT-Plus.
Cite
Text
Jiao et al. "Collaborative Vision-Text Representation Optimizing for Open-Vocabulary Segmentation." Proceedings of the European Conference on Computer Vision (ECCV), 2024. doi:10.1007/978-3-031-73414-4_23
Markdown
[Jiao et al. "Collaborative Vision-Text Representation Optimizing for Open-Vocabulary Segmentation." Proceedings of the European Conference on Computer Vision (ECCV), 2024.](https://mlanthology.org/eccv/2024/jiao2024eccv-collaborative/) doi:10.1007/978-3-031-73414-4_23
BibTeX
@inproceedings{jiao2024eccv-collaborative,
title = {{Collaborative Vision-Text Representation Optimizing for Open-Vocabulary Segmentation}},
author = {Jiao, Siyu and Zhu, Hongguang and Wei, Yunchao and Zhao, Yao and Huang, Jiannan and Shi, Humphrey},
booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
year = {2024},
doi = {10.1007/978-3-031-73414-4_23},
url = {https://mlanthology.org/eccv/2024/jiao2024eccv-collaborative/}
}