CAT-Seg: Cost Aggregation for Open-Vocabulary Semantic Segmentation

Abstract

Open-vocabulary semantic segmentation presents the challenge of labeling each pixel within an image based on a wide range of text descriptions. In this work, we introduce a novel cost-based approach to adapt vision-language foundation models, notably CLIP, for the intricate task of semantic segmentation. Through aggregating the cosine similarity score, i.e., the cost volume between image and text embeddings, our method potently adapts CLIP for segmenting seen and unseen classes by fine-tuning its encoders, addressing the challenges faced by existing methods in handling unseen classes. Building upon this, we explore methods to effectively aggregate the cost volume, considering its multi-modal nature of being established between image and text embeddings. Furthermore, we examine various methods for efficiently fine-tuning CLIP.
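
The cost volume the abstract refers to is the per-pixel, per-class cosine similarity between dense CLIP image embeddings and class-name text embeddings. Below is a minimal PyTorch sketch of that construction only, not the paper's implementation; the function name and tensor shapes are illustrative assumptions.

import torch
import torch.nn.functional as F

def build_cost_volume(image_feats: torch.Tensor, text_feats: torch.Tensor) -> torch.Tensor:
    """Cosine-similarity cost volume between CLIP embeddings (hypothetical helper).

    image_feats: (B, D, H, W) dense patch embeddings from the CLIP image encoder
    text_feats:  (T, D) embeddings of the T class descriptions from the text encoder
    returns:     (B, T, H, W) cost volume, one similarity map per class
    """
    image_feats = F.normalize(image_feats, dim=1)  # unit-normalize over channel dim D
    text_feats = F.normalize(text_feats, dim=1)    # unit-normalize over embedding dim D
    # Dot product of unit vectors = cosine similarity at every spatial location.
    return torch.einsum("bdhw,td->bthw", image_feats, text_feats)

# Example with assumed shapes: 512-dim CLIP features, 24x24 patch grid, 171 classes.
# cost = build_cost_volume(torch.randn(1, 512, 24, 24), torch.randn(171, 512))

In CAT-Seg, this volume is then aggregated (rather than the raw embeddings being decoded directly), which is what allows fine-tuning the CLIP encoders without collapsing performance on unseen classes.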

Cite

Text

Cho et al. "CAT-Seg: Cost Aggregation for Open-Vocabulary Semantic Segmentation." Conference on Computer Vision and Pattern Recognition, 2024. doi:10.1109/CVPR52733.2024.00394

Markdown

[Cho et al. "CAT-Seg: Cost Aggregation for Open-Vocabulary Semantic Segmentation." Conference on Computer Vision and Pattern Recognition, 2024.](https://mlanthology.org/cvpr/2024/cho2024cvpr-catseg/) doi:10.1109/CVPR52733.2024.00394

BibTeX

@inproceedings{cho2024cvpr-catseg,
  title     = {{CAT-Seg: Cost Aggregation for Open-Vocabulary Semantic Segmentation}},
  author    = {Cho, Seokju and Shin, Heeseong and Hong, Sunghwan and Arnab, Anurag and Seo, Paul Hongsuck and Kim, Seungryong},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2024},
  pages     = {4113--4123},
  doi       = {10.1109/CVPR52733.2024.00394},
  url       = {https://mlanthology.org/cvpr/2024/cho2024cvpr-catseg/}
}