Self-Support Few-Shot Semantic Segmentation
Abstract
Existing few-shot segmentation methods have achieved great progress based on the support-query matching framework. However, they still suffer heavily from the limited coverage of intra-class variations in the few-shot supports. Motivated by the simple Gestalt principle that pixels belonging to the same object are more similar to one another than to pixels of different objects of the same class, we propose a novel self-support matching idea to alleviate this problem. It uses query prototypes to match query features, where the query prototypes are collected from high-confidence query prediction regions. This strategy can effectively capture the consistent underlying characteristics of the query objects, and thus match query features more faithfully. We also propose an adaptive self-support background prototype generation module and a self-support loss to further facilitate the self-support matching procedure. Our self-support network substantially improves prototype quality, benefits more from stronger backbones and additional supports, and achieves state-of-the-art results on multiple datasets. Code is available at https://github.com/fanq15/SSP.
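To make the two-stage matching idea concrete, below is a minimal PyTorch sketch of self-support matching. The helper names (`masked_average_pooling`, `cosine_similarity_map`) and the confidence thresholds `tau_fg`/`tau_bg` are illustrative assumptions, not the authors' released implementation; see the repository above for the official code.

```python
# A minimal sketch of self-support matching, assuming feature maps from a
# frozen backbone. Shapes, thresholds, and helper names are assumptions.
import torch
import torch.nn.functional as F

def masked_average_pooling(feat, mask):
    """Average C-dim features over spatial positions where mask == 1.

    feat: (B, C, H, W) feature map; mask: (B, 1, H, W) binary mask.
    Returns a (B, C) prototype vector.
    """
    mask = F.interpolate(mask, size=feat.shape[-2:], mode="bilinear",
                         align_corners=False)
    return (feat * mask).sum(dim=(2, 3)) / (mask.sum(dim=(2, 3)) + 1e-6)

def cosine_similarity_map(feat, proto):
    """Cosine similarity between every pixel feature and a prototype.

    feat: (B, C, H, W); proto: (B, C). Returns a (B, H, W) similarity map.
    """
    feat = F.normalize(feat, dim=1)
    proto = F.normalize(proto, dim=1)
    return torch.einsum("bchw,bc->bhw", feat, proto)

def self_support_matching(supp_feat, supp_mask, query_feat,
                          tau_fg=0.7, tau_bg=0.2):
    """Stage 1: conventional support-query matching gives an initial
    prediction. Stage 2: high-confidence query regions yield self-support
    prototypes that re-match the query features. The thresholds tau_fg and
    tau_bg are illustrative, not values from the paper.
    """
    # Stage 1: support prototype -> initial query prediction.
    supp_proto = masked_average_pooling(supp_feat, supp_mask)   # (B, C)
    init_sim = cosine_similarity_map(query_feat, supp_proto)    # (B, H, W)

    # Collect high-confidence foreground/background regions of the query.
    fg_mask = (init_sim > tau_fg).float().unsqueeze(1)          # (B, 1, H, W)
    bg_mask = (init_sim < tau_bg).float().unsqueeze(1)

    # Stage 2: self-support prototypes drawn from the query itself.
    self_fg_proto = masked_average_pooling(query_feat, fg_mask)
    self_bg_proto = masked_average_pooling(query_feat, bg_mask)

    fg_sim = cosine_similarity_map(query_feat, self_fg_proto)
    bg_sim = cosine_similarity_map(query_feat, self_bg_proto)
    return torch.stack([bg_sim, fg_sim], dim=1)                 # (B, 2, H, W)
```

Note that this sketch collapses the background to a single global prototype for brevity; the paper's adaptive self-support background prototype generation module is richer, and the full method also aggregates support and self-support prototypes rather than matching against the self-support ones alone.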
Cite
Text
Fan et al. "Self-Support Few-Shot Semantic Segmentation." Proceedings of the European Conference on Computer Vision (ECCV), 2022. doi:10.1007/978-3-031-19800-7_41
Markdown
[Fan et al. "Self-Support Few-Shot Semantic Segmentation." Proceedings of the European Conference on Computer Vision (ECCV), 2022.](https://mlanthology.org/eccv/2022/fan2022eccv-selfsupport/) doi:10.1007/978-3-031-19800-7_41
BibTeX
@inproceedings{fan2022eccv-selfsupport,
  title     = {{Self-Support Few-Shot Semantic Segmentation}},
  author    = {Fan, Qi and Pei, Wenjie and Tai, Yu-Wing and Tang, Chi-Keung},
  booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
  year      = {2022},
  doi       = {10.1007/978-3-031-19800-7_41},
  url       = {https://mlanthology.org/eccv/2022/fan2022eccv-selfsupport/}
}