Iterative Few-Shot Semantic Segmentation from Image Label Text

Abstract

Few-shot semantic segmentation aims to learn to segment objects of unseen classes with the guidance of only a few support images. Most previous methods rely on pixel-level labels of the support images. In this paper, we focus on a more challenging setting in which only image-level labels are available. We propose a general framework that first generates coarse masks with the help of the powerful vision-language model CLIP, and then iteratively and mutually refines the mask predictions of the support and query images. Extensive experiments on the PASCAL-5i and COCO-20i datasets demonstrate that our method not only outperforms state-of-the-art weakly supervised approaches by a significant margin, but also achieves results comparable to or better than recent supervised methods. Moreover, our method generalizes well to in-the-wild images and uncommon classes. Code will be available at https://github.com/Whileherham/IMR-HSNet.
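The two-stage idea in the abstract, coarse masks from an image-level label followed by alternating refinement of the support and query masks, can be sketched as below. All function names and the mask representation are illustrative assumptions, not the authors' implementation: the paper uses CLIP for coarse mask generation and a segmentation network for refinement, while this sketch substitutes trivial intensity thresholds purely to show the control flow.

```python
# Hypothetical sketch of iterative mutual refinement; images are 2-D lists of
# floats in [0, 1], masks are 2-D lists of 0/1. Support and query images are
# assumed to share the same spatial size for simplicity.

def coarse_mask_from_label(image, class_label):
    """Stand-in for CLIP-based coarse mask generation from an image-level
    label: here, a trivial intensity threshold (illustration only)."""
    return [[1 if px > 0.5 else 0 for px in row] for row in image]

def refine(target_image, guide_mask):
    """Stand-in for one refinement step guided by the other image's mask:
    keeps a pixel only where the intensity cue and the guide agree."""
    return [
        [1 if px > 0.4 and g == 1 else 0 for px, g in zip(row, guide_row)]
        for row, guide_row in zip(target_image, guide_mask)
    ]

def iterative_mutual_refinement(support_img, query_img, class_label, steps=3):
    # Stage 1: image-level label -> coarse masks for both images.
    support_mask = coarse_mask_from_label(support_img, class_label)
    query_mask = coarse_mask_from_label(query_img, class_label)
    # Stage 2: alternately refine each mask using the other as guidance.
    for _ in range(steps):
        query_mask = refine(query_img, support_mask)
        support_mask = refine(support_img, query_mask)
    return query_mask
```

The alternation is the key design point: each pass, the support prediction guides the query and the improved query prediction guides the support in turn, so errors in the initial coarse masks can be corrected from either side.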

Cite

Text

Wang et al. "Iterative Few-Shot Semantic Segmentation from Image Label Text." International Joint Conference on Artificial Intelligence, 2022. doi:10.24963/IJCAI.2022/193

Markdown

[Wang et al. "Iterative Few-Shot Semantic Segmentation from Image Label Text." International Joint Conference on Artificial Intelligence, 2022.](https://mlanthology.org/ijcai/2022/wang2022ijcai-iterative/) doi:10.24963/IJCAI.2022/193

BibTeX

@inproceedings{wang2022ijcai-iterative,
  title     = {{Iterative Few-Shot Semantic Segmentation from Image Label Text}},
  author    = {Wang, Haohan and Liu, Liang and Zhang, Wuhao and Zhang, Jiangning and Gan, Zhenye and Wang, Yabiao and Wang, Chengjie and Wang, Haoqian},
  booktitle = {International Joint Conference on Artificial Intelligence},
  year      = {2022},
  pages     = {1385--1392},
  doi       = {10.24963/IJCAI.2022/193},
  url       = {https://mlanthology.org/ijcai/2022/wang2022ijcai-iterative/}
}