Pointly-Supervised Panoptic Segmentation
Abstract
In this paper, we propose a new approach to applying point-level annotations for weakly-supervised panoptic segmentation. Instead of the dense pixel-level labels used by fully supervised methods, point-level labels provide only a single point per target as supervision, significantly reducing the annotation burden. We formulate the problem in an end-to-end framework by simultaneously generating panoptic pseudo-masks from point-level labels and learning from them. To tackle the core challenge, i.e., panoptic pseudo-mask generation, we propose a principled approach to parsing pixels by minimizing pixel-to-point traversing costs, which model semantic similarity, low-level texture cues, and high-level manifold knowledge to discriminate panoptic targets. We conduct experiments on the Pascal VOC and the MS COCO datasets to demonstrate the approach’s effectiveness and show state-of-the-art performance on the weakly-supervised panoptic segmentation problem. Code is available at https://github.com/BraveGroup/PSPS.git.
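To make the pseudo-mask idea concrete, the following is a minimal toy sketch of assigning each pixel to the class of the annotated point with the lowest combined cost. It is illustrative only: it uses a hand-crafted spatial-plus-color cost with made-up weights `alpha` and `beta`, whereas the paper's traversing costs also incorporate learned semantic similarity and manifold knowledge.

```python
import numpy as np

def assign_pixels_to_points(image, points, labels, alpha=1.0, beta=10.0):
    """Toy pseudo-mask generation: label each pixel with the class of the
    annotated point minimizing a spatial + color cost.

    image:  (H, W, 3) float array
    points: list of (row, col) annotated point coordinates
    labels: list of integer class ids, one per point
    alpha, beta: hypothetical weights for the two cost terms
    """
    H, W, _ = image.shape
    ys, xs = np.mgrid[0:H, 0:W]
    best_cost = np.full((H, W), np.inf)
    mask = np.zeros((H, W), dtype=np.int64)
    for (py, px), cls in zip(points, labels):
        # Normalized Euclidean distance to the annotated point.
        spatial = np.sqrt((ys - py) ** 2 + (xs - px) ** 2) / max(H, W)
        # Low-level appearance cue: color difference to the point's pixel.
        color = np.linalg.norm(image - image[py, px], axis=-1)
        cost = alpha * spatial + beta * color
        update = cost < best_cost
        best_cost[update] = cost[update]
        mask[update] = cls
    return mask
```

In this sketch the appearance term dominates (large `beta`), so pixels follow regions of similar color around each point; the paper instead learns these costs end-to-end together with the segmentation network.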
Cite
Text
Fan et al. "Pointly-Supervised Panoptic Segmentation." Proceedings of the European Conference on Computer Vision (ECCV), 2022. doi:10.1007/978-3-031-20056-4_19
Markdown
[Fan et al. "Pointly-Supervised Panoptic Segmentation." Proceedings of the European Conference on Computer Vision (ECCV), 2022.](https://mlanthology.org/eccv/2022/fan2022eccv-pointlysupervised/) doi:10.1007/978-3-031-20056-4_19
BibTeX
@inproceedings{fan2022eccv-pointlysupervised,
title = {{Pointly-Supervised Panoptic Segmentation}},
author = {Fan, Junsong and Zhang, Zhaoxiang and Tan, Tieniu},
booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
year = {2022},
doi = {10.1007/978-3-031-20056-4_19},
url = {https://mlanthology.org/eccv/2022/fan2022eccv-pointlysupervised/}
}