A Semantic Space Is Worth 256 Language Descriptions: Make Stronger Segmentation Models with Descriptive Properties

Abstract

We introduce ProLab, a novel approach using a property-level label space for creating strong interpretable segmentation models. Instead of relying solely on category-specific annotations, ProLab uses descriptive properties grounded in common sense knowledge for supervising segmentation models. It is based on two core designs. First, we employ Large Language Models (LLMs) and carefully crafted prompts to generate descriptions of all involved categories that carry meaningful common sense knowledge and follow a structured format. Second, we introduce a description embedding model preserving semantic correlation across descriptions and then cluster them into a set of descriptive properties (e.g., 256) using K-Means. These properties are based on interpretable common sense knowledge consistent with theories of human recognition. We empirically show that our approach makes segmentation models perform stronger on five classic benchmarks (i.e., ADE20K, COCO-Stuff, Pascal Context, Cityscapes, and BDD). Our method also shows better scalability with extended training steps than category-level supervision. Our interpretable segmentation framework also emerges with the generalization ability to segment out-of-domain or unknown categories using in-domain descriptive properties. Code is available at https://github.com/lambert-x/ProLab.
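The second design above (embedding category descriptions and clustering them into a compact set of descriptive properties) can be illustrated with a minimal sketch. This is not the paper's released implementation: the embedding checkpoint, the helper name `build_property_label_space`, and the multi-hot target construction are assumptions for illustration, using a sentence-transformer encoder and scikit-learn's KMeans with 256 clusters.

```python
# Minimal sketch (assumptions, not the paper's exact pipeline): LLM-generated
# descriptions per category are embedded with a sentence-transformer and
# clustered into 256 descriptive properties with K-Means.
import numpy as np
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans


def build_property_label_space(category_descriptions, num_properties=256):
    """Embed per-category descriptions and cluster them into descriptive properties.

    category_descriptions: dict mapping category name -> list of description strings.
    Returns (kmeans, property_targets), where property_targets[name] is a multi-hot
    vector over the property clusters that can supervise a segmentation head.
    """
    all_descriptions = [d for descs in category_descriptions.values() for d in descs]

    # Embedding model preserving semantic correlation across descriptions
    # (the concrete checkpoint here is an assumption).
    embedder = SentenceTransformer("all-MiniLM-L6-v2")
    embeddings = embedder.encode(all_descriptions, normalize_embeddings=True)

    # Cluster all descriptions into a fixed number of descriptive properties.
    kmeans = KMeans(n_clusters=num_properties, random_state=0, n_init="auto")
    cluster_ids = kmeans.fit_predict(embeddings)

    # Map each category to a multi-hot target over the property clusters
    # covered by its descriptions.
    property_targets = {}
    idx = 0
    for name, descs in category_descriptions.items():
        target = np.zeros(num_properties, dtype=np.float32)
        target[cluster_ids[idx: idx + len(descs)]] = 1.0
        property_targets[name] = target
        idx += len(descs)
    return kmeans, property_targets
```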

Cite

Text

Xiao et al. "A Semantic Space Is Worth 256 Language Descriptions: Make Stronger Segmentation Models with Descriptive Properties." Proceedings of the European Conference on Computer Vision (ECCV), 2024. doi:10.1007/978-3-031-72920-1_14

Markdown

[Xiao et al. "A Semantic Space Is Worth 256 Language Descriptions: Make Stronger Segmentation Models with Descriptive Properties." Proceedings of the European Conference on Computer Vision (ECCV), 2024.](https://mlanthology.org/eccv/2024/xiao2024eccv-semantic/) doi:10.1007/978-3-031-72920-1_14

BibTeX

@inproceedings{xiao2024eccv-semantic,
  title     = {{A Semantic Space Is Worth 256 Language Descriptions: Make Stronger Segmentation Models with Descriptive Properties}},
  author    = {Xiao, Junfei and Zhou, Ziqi and Li, Wenxuan and Lan, Shiyi and Mei, Jieru and Yu, Zhiding and Zhao, Bingchen and Yuille, Alan and Zhou, Yuyin and Xie, Cihang},
  booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
  year      = {2024},
  doi       = {10.1007/978-3-031-72920-1_14},
  url       = {https://mlanthology.org/eccv/2024/xiao2024eccv-semantic/}
}