Effective SAM Combination for Open-Vocabulary Semantic Segmentation
Abstract
Open-vocabulary semantic segmentation aims to assign pixel-level labels to images across an unlimited range of classes. Traditional methods address this by sequentially connecting a powerful mask proposal generator, such as the Segment Anything Model (SAM), with a pre-trained vision-language model like CLIP. However, these two-stage approaches often suffer from high computational costs and memory inefficiencies. In this paper, we propose ESC-Net, a novel one-stage open-vocabulary segmentation model that leverages the SAM decoder blocks for class-agnostic segmentation within an efficient inference framework. By embedding pseudo prompts generated from image-text correlations into SAM's promptable segmentation framework, ESC-Net achieves refined spatial aggregation for accurate mask predictions. Additionally, a Vision-Language Fusion (VLF) module enhances the final mask prediction through image and text guidance. ESC-Net achieves superior performance on standard benchmarks, including ADE20K, PASCAL-VOC, and PASCAL-Context, outperforming prior methods in both efficiency and accuracy. Comprehensive ablation studies further demonstrate its robustness across challenging conditions.
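To make the two ideas in the abstract concrete, below is a minimal PyTorch sketch of (1) pseudo prompts derived from image-text correlations and (2) a vision-language fusion step. All module names, shapes, and hyperparameters here are illustrative assumptions, not the authors' released implementation; the frozen SAM mask decoder that would consume the prompt tokens is elided.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class PseudoPromptGenerator(nn.Module):
    """Hypothetical sketch: correlate CLIP-style patch features with a text
    embedding, then project the correlation-pooled feature into SAM-style
    prompt tokens (replacing user clicks or boxes)."""

    def __init__(self, dim: int = 256, num_prompts: int = 4):
        super().__init__()
        self.num_prompts = num_prompts
        self.proj = nn.Linear(dim, num_prompts * dim)

    def forward(self, img_feats: torch.Tensor, txt_embed: torch.Tensor) -> torch.Tensor:
        # img_feats: (B, N, D) patch features; txt_embed: (B, D) class embedding
        corr = F.softmax(img_feats @ txt_embed.unsqueeze(-1), dim=1)  # (B, N, 1)
        pooled = (corr * img_feats).sum(dim=1)                        # (B, D)
        return self.proj(pooled).view(-1, self.num_prompts, img_feats.size(-1))


class VisionLanguageFusion(nn.Module):
    """Hypothetical VLF sketch: cross-attend mask features (queries) to the
    text embedding (key/value) to inject text guidance before prediction."""

    def __init__(self, dim: int = 256):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads=8, batch_first=True)

    def forward(self, mask_feats: torch.Tensor, txt_embed: torch.Tensor) -> torch.Tensor:
        fused, _ = self.attn(mask_feats, txt_embed.unsqueeze(1), txt_embed.unsqueeze(1))
        return mask_feats + fused  # residual text-guided features


# Usage sketch with dummy tensors (shapes are assumptions).
B, N, D = 2, 196, 256
img_feats = torch.randn(B, N, D)
txt_embed = torch.randn(B, D)
prompts = PseudoPromptGenerator(D)(img_feats, txt_embed)   # (2, 4, 256)
fused = VisionLanguageFusion(D)(img_feats, txt_embed)      # (2, 196, 256)
```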
Cite
Text
Lee et al. "Effective SAM Combination for Open-Vocabulary Semantic Segmentation." Conference on Computer Vision and Pattern Recognition, 2025. doi:10.1109/CVPR52734.2025.02429
Markdown
[Lee et al. "Effective SAM Combination for Open-Vocabulary Semantic Segmentation." Conference on Computer Vision and Pattern Recognition, 2025.](https://mlanthology.org/cvpr/2025/lee2025cvpr-effective/) doi:10.1109/CVPR52734.2025.02429
BibTeX
@inproceedings{lee2025cvpr-effective,
title = {{Effective SAM Combination for Open-Vocabulary Semantic Segmentation}},
author = {Lee, Minhyeok and Cho, Suhwan and Lee, Jungho and Yang, Sunghun and Choi, Heeseung and Kim, Ig-Jae and Lee, Sangyoun},
booktitle = {Conference on Computer Vision and Pattern Recognition},
year = {2025},
pages = {26081--26090},
doi = {10.1109/CVPR52734.2025.02429},
url = {https://mlanthology.org/cvpr/2025/lee2025cvpr-effective/}
}