S4M: Boosting Semi-Supervised Instance Segmentation with SAM
Abstract
Semi-supervised instance segmentation is challenging because limited labeled data makes it difficult to accurately localize distinct object instances. Existing teacher-student frameworks remain constrained by unreliable pseudo-label quality stemming from the scarce annotations. While the Segment Anything Model (SAM) offers robust segmentation capabilities at various granularities, applying it directly introduces challenges such as class-agnostic predictions and potential over-segmentation. To address these issues, we carefully integrate SAM into the semi-supervised instance segmentation framework, developing a novel distillation method that effectively captures SAM's precise localization capabilities without compromising semantic recognition. Furthermore, we incorporate pseudo-label refinement and a specialized data augmentation that leverages the refined pseudo-labels, resulting in superior performance. We establish state-of-the-art performance and provide comprehensive experiments and ablation studies to validate the effectiveness of our proposed approach.
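To make the idea concrete, the sketch below illustrates one way a class-agnostic SAM could refine a teacher's pseudo-labels while the semantic labels stay with the teacher. It is a minimal, hypothetical illustration rather than the S4M method, whose details are not given on this page: the function names, thresholds, and data layout (`refine_pseudo_labels`, `score_threshold`, `iou_threshold`, the per-instance dict) are assumptions, while the `segment_anything` calls (`sam_model_registry`, `SamPredictor.set_image`, `SamPredictor.predict`) are the library's real API.

```python
# Hypothetical sketch of SAM-guided pseudo-label refinement in a
# teacher-student pipeline. Not the S4M implementation: all helper names
# and thresholds are assumptions; only the segment_anything calls are real.
import numpy as np
from segment_anything import sam_model_registry, SamPredictor


def mask_iou(a: np.ndarray, b: np.ndarray) -> float:
    """IoU between two boolean masks."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return float(inter) / float(union) if union > 0 else 0.0


def refine_pseudo_labels(image, teacher_preds, predictor: SamPredictor,
                         score_threshold=0.7, iou_threshold=0.5):
    """Prompt SAM with the teacher's pseudo-boxes; keep SAM's mask only when
    it agrees with the teacher, so semantics come from the teacher and
    localization may be sharpened by SAM."""
    predictor.set_image(image)  # image: HxWx3 uint8 RGB array
    refined = []
    for pred in teacher_preds:  # dict: 'box' (XYXY), 'mask', 'label', 'score'
        if pred["score"] < score_threshold:
            continue  # drop low-confidence pseudo-labels
        masks, _, _ = predictor.predict(
            box=np.asarray(pred["box"], dtype=np.float32),
            multimask_output=False,
        )
        sam_mask = masks[0].astype(bool)
        teacher_mask = pred["mask"].astype(bool)
        # SAM is class-agnostic and may over-segment; adopt its mask only
        # when it overlaps the teacher's mask enough, else keep the teacher's.
        mask = sam_mask if mask_iou(sam_mask, teacher_mask) >= iou_threshold else teacher_mask
        refined.append({"box": pred["box"], "mask": mask,
                        "label": pred["label"], "score": pred["score"]})
    return refined


# Usage sketch (checkpoint path is a placeholder):
# sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h.pth")
# predictor = SamPredictor(sam)
# pseudo_labels = refine_pseudo_labels(unlabeled_image, teacher_outputs, predictor)
```

The IoU gate mirrors the concern raised in the abstract about over-segmentation: SAM's mask is adopted only when it does not contradict the teacher's prediction. The paper's distillation and augmentation components are not represented in this sketch.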
Cite
Text
Yoon et al. "S4M: Boosting Semi-Supervised Instance Segmentation with SAM." International Conference on Computer Vision, 2025.
Markdown
[Yoon et al. "S4M: Boosting Semi-Supervised Instance Segmentation with SAM." International Conference on Computer Vision, 2025.](https://mlanthology.org/iccv/2025/yoon2025iccv-s4m/)
BibTeX
@inproceedings{yoon2025iccv-s4m,
  title     = {{S4M: Boosting Semi-Supervised Instance Segmentation with SAM}},
  author    = {Yoon, Heeji and Shin, Heeseong and Hong, Eunbeen and Choi, Hyunwook and Cho, Hansang and Jeong, Daun and Kim, Seungryong},
  booktitle = {International Conference on Computer Vision},
  year      = {2025},
  pages     = {20226--20236},
  url       = {https://mlanthology.org/iccv/2025/yoon2025iccv-s4m/}
}