TransFGU: A Top-Down Approach to Fine-Grained Unsupervised Semantic Segmentation

Abstract

Unsupervised semantic segmentation aims to obtain high-level semantic representations from low-level visual features without manual annotations. Most existing methods are bottom-up approaches that try to group pixels into regions based on their visual cues or certain predefined rules. As a result, it is difficult for these bottom-up approaches to generate fine-grained semantic segmentation for complicated scenes that contain multiple objects, some of which share a similar visual appearance. In contrast, we propose the first top-down unsupervised semantic segmentation framework for fine-grained segmentation in extremely complicated scenarios. Specifically, we first obtain rich, high-level structured semantic concept information from large-scale vision data in a self-supervised learning manner, and we use such information as a prior to discover potential semantic categories present in the target dataset. Second, the discovered high-level semantic categories are mapped to low-level pixel features by calculating the class activation map (CAM) with respect to each discovered semantic representation. Lastly, the obtained CAMs serve as pseudo labels to train the segmentation module and produce the final semantic segmentation. Experimental results on multiple semantic segmentation benchmarks show that our top-down unsupervised segmentation is robust to both object-centric and scene-centric datasets under different levels of semantic granularity, and it outperforms all current state-of-the-art bottom-up methods.
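
The step of mapping discovered high-level categories to pixels via CAMs, and thresholding the result into pseudo labels, can be illustrated with a minimal sketch. This is not the authors' implementation; it assumes patch features from a frozen self-supervised ViT (e.g., DINO) and per-category classifier weights, and all names (cam_pseudo_labels, patch_feats, class_weights, threshold) are illustrative.

# Minimal sketch, assuming frozen self-supervised ViT patch features and
# one linear weight vector per discovered category (not the authors' code).
import torch
import torch.nn.functional as F

def cam_pseudo_labels(patch_feats, class_weights, out_size, threshold=0.5):
    """patch_feats: (B, C, h, w) patch features reshaped to a grid.
    class_weights: (K, C) one weight vector per discovered category.
    Returns (B, H, W) pseudo labels; -1 marks low-confidence pixels."""
    # Class activation maps: dot product of each patch feature with each
    # category weight vector -> (B, K, h, w).
    cams = torch.einsum('bchw,kc->bkhw', patch_feats, class_weights)
    # Normalise each map to [0, 1] so one threshold applies to all classes.
    cams = cams - cams.amin(dim=(2, 3), keepdim=True)
    cams = cams / cams.amax(dim=(2, 3), keepdim=True).clamp(min=1e-6)
    # Upsample the coarse patch-level maps to the target resolution.
    cams = F.interpolate(cams, size=out_size, mode='bilinear', align_corners=False)
    conf, labels = cams.max(dim=1)          # (B, H, W) confidence and class index
    labels[conf < threshold] = -1           # ignore uncertain pixels during training
    return labels

# Usage with dummy tensors: 2 images, 384-d features on a 14x14 patch grid,
# 27 hypothetical discovered categories, 224x224 pseudo-label maps.
feats = torch.randn(2, 384, 14, 14)
weights = torch.randn(27, 384)
pseudo = cam_pseudo_labels(feats, weights, out_size=(224, 224))
print(pseudo.shape)  # torch.Size([2, 224, 224])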

Cite

Text

Yin et al. "TransFGU: A Top-Down Approach to Fine-Grained Unsupervised Semantic Segmentation." Proceedings of the European Conference on Computer Vision (ECCV), 2022. doi:10.1007/978-3-031-19818-2_5

Markdown

[Yin et al. "TransFGU: A Top-Down Approach to Fine-Grained Unsupervised Semantic Segmentation." Proceedings of the European Conference on Computer Vision (ECCV), 2022.](https://mlanthology.org/eccv/2022/yin2022eccv-transfgu/) doi:10.1007/978-3-031-19818-2_5

BibTeX

@inproceedings{yin2022eccv-transfgu,
  title     = {{TransFGU: A Top-Down Approach to Fine-Grained Unsupervised Semantic Segmentation}},
  author    = {Yin, Zhaoyuan and Wang, Pichao and Wang, Fan and Xu, Xianzhe and Zhang, Hanling and Li, Hao and Jin, Rong},
  booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
  year      = {2022},
  doi       = {10.1007/978-3-031-19818-2_5},
  url       = {https://mlanthology.org/eccv/2022/yin2022eccv-transfgu/}
}