Soft Self-Labeling and Potts Relaxations for Weakly-Supervised Segmentation
Abstract
We consider weakly supervised segmentation where only a fraction of pixels have ground-truth labels (scribbles) and focus on a self-labeling approach optimizing relaxations of the standard unsupervised CRF/Potts loss on unlabeled pixels. While WSSS methods can directly optimize such losses via gradient descent, prior work suggests that higher-order optimization can improve network training by introducing hidden pseudo-labels and powerful CRF sub-problem solvers, e.g., graph cuts. However, previously used hard pseudo-labels cannot represent class uncertainty or errors, which motivates soft self-labeling. We derive a principled auxiliary loss and systematically evaluate standard and new CRF relaxations (convex and non-convex), neighborhood systems, and terms connecting network predictions with soft pseudo-labels. We also propose a general continuous sub-problem solver. Using only standard architectures, soft self-labeling consistently improves scribble-based training and outperforms significantly more complex specialized WSSS systems. It can even outperform full pixel-precise supervision. Our general ideas apply to other weakly-supervised problems/systems.
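As a minimal illustration of the kind of loss the abstract refers to, the sketch below implements one common quadratic relaxation of the Potts penalty on soft (softmax) predictions over a 4-connected pixel grid. This is a generic textbook relaxation for illustration only, not the paper's exact formulation, solver, or neighborhood system.

```python
import numpy as np

def potts_quadratic_relaxation(probs):
    """Quadratic relaxation of the Potts loss on a soft segmentation.

    probs: (H, W, K) array of per-pixel class probabilities (softmax output).
    Penalizes each 4-connected neighbor pair (i, j) by 0.5 * ||p_i - p_j||^2:
    zero when neighbors agree, and 1 per edge for confident disagreement.
    (Illustrative relaxation only; not the paper's exact loss.)
    """
    h = probs[:, 1:, :] - probs[:, :-1, :]   # horizontal neighbor differences
    v = probs[1:, :, :] - probs[:-1, :, :]   # vertical neighbor differences
    return 0.5 * (np.sum(h ** 2) + np.sum(v ** 2))

# A uniformly labeled prediction incurs zero penalty; a hard vertical
# boundary in a 2x2 image incurs 1 per crossing edge (2 edges -> loss 2).
uniform = np.zeros((2, 2, 2))
uniform[..., 0] = 1.0                 # every pixel confidently class 0
boundary = np.zeros((2, 2, 2))
boundary[:, 0, 0] = 1.0               # left column: class 0
boundary[:, 1, 1] = 1.0               # right column: class 1
print(potts_quadratic_relaxation(uniform))   # 0.0
print(potts_quadratic_relaxation(boundary))  # 2.0
```

Because the penalty is a smooth function of the probabilities, it can be optimized directly by gradient descent, which is the baseline the paper's higher-order self-labeling approach is compared against.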
Cite
Text

Zhang and Boykov. "Soft Self-Labeling and Potts Relaxations for Weakly-Supervised Segmentation." Conference on Computer Vision and Pattern Recognition, 2025. doi:10.1109/CVPR52734.2025.01885

Markdown

[Zhang and Boykov. "Soft Self-Labeling and Potts Relaxations for Weakly-Supervised Segmentation." Conference on Computer Vision and Pattern Recognition, 2025.](https://mlanthology.org/cvpr/2025/zhang2025cvpr-soft/) doi:10.1109/CVPR52734.2025.01885

BibTeX
@inproceedings{zhang2025cvpr-soft,
title = {{Soft Self-Labeling and Potts Relaxations for Weakly-Supervised Segmentation}},
author = {Zhang, Zhongwen and Boykov, Yuri},
booktitle = {Conference on Computer Vision and Pattern Recognition},
year = {2025},
pages = {20244-20253},
doi = {10.1109/CVPR52734.2025.01885},
url = {https://mlanthology.org/cvpr/2025/zhang2025cvpr-soft/}
}