TWIST: Two-Way Inter-Label Self-Training for Semi-Supervised 3D Instance Segmentation
Abstract
We explore how to alleviate the label-hungry problem of 3D instance segmentation in a semi-supervised setting. To leverage unlabeled data to boost model performance, we present a novel Two-Way Inter-label Self-Training framework named TWIST. It exploits the inherent correlation between the semantic understanding and the instance information of a scene. Specifically, we consider two kinds of pseudo labels for semantic- and instance-level supervision. Our key design is to provide object-level information for denoising pseudo labels and to exploit their correlation for two-way mutual enhancement, thereby iteratively improving pseudo-label quality. TWIST attains leading performance on both ScanNet and S3DIS compared with recent 3D pre-training approaches, and can cooperate with them to further boost performance, e.g., +4.4% AP50 on the 1%-label ScanNet data-efficient benchmark. Code is available at https://github.com/dvlab-research/TWIST.
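The abstract describes two-way mutual enhancement between semantic and instance pseudo labels. The toy sketch below is not the authors' method, only an illustration of the general idea under assumed conventions: per-point semantic pseudo-labels and instance pseudo-labels, where each instance's majority class can correct its member points (instance-to-semantic denoising), and an instance whose points are too semantically inconsistent is discarded (semantic-to-instance denoising). The function name, threshold, and label encoding are all hypothetical.

```python
# Hedged sketch of two-way pseudo-label refinement (illustrative only,
# not the TWIST implementation).
from collections import Counter

def twoway_refine(sem_labels, inst_labels, purity=0.7, ignore=-1):
    """One round of mutual pseudo-label refinement.

    sem_labels:  per-point semantic class ids
    inst_labels: per-point instance ids (ignore = no instance)
    Returns refined (sem_labels, inst_labels) as new lists.
    """
    sem, inst = list(sem_labels), list(inst_labels)
    # Group point indices by instance id.
    groups = {}
    for i, g in enumerate(inst):
        if g != ignore:
            groups.setdefault(g, []).append(i)
    for idxs in groups.values():
        majority, count = Counter(sem[i] for i in idxs).most_common(1)[0]
        if count / len(idxs) >= purity:
            # Instance -> semantic: snap member points to the majority class.
            for i in idxs:
                sem[i] = majority
        else:
            # Semantic -> instance: too inconsistent, drop the instance mask.
            for i in idxs:
                inst[i] = ignore
    return sem, inst
```

In an actual self-training loop, a refinement step like this would be applied to model predictions on unlabeled scenes before they are reused as supervision for the next training round.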
Cite
Text
Chu et al. "TWIST: Two-Way Inter-Label Self-Training for Semi-Supervised 3D Instance Segmentation." Conference on Computer Vision and Pattern Recognition, 2022. doi:10.1109/CVPR52688.2022.00117
Markdown
[Chu et al. "TWIST: Two-Way Inter-Label Self-Training for Semi-Supervised 3D Instance Segmentation." Conference on Computer Vision and Pattern Recognition, 2022.](https://mlanthology.org/cvpr/2022/chu2022cvpr-twist/) doi:10.1109/CVPR52688.2022.00117
BibTeX
@inproceedings{chu2022cvpr-twist,
title = {{TWIST: Two-Way Inter-Label Self-Training for Semi-Supervised 3D Instance Segmentation}},
author = {Chu, Ruihang and Ye, Xiaoqing and Liu, Zhengzhe and Tan, Xiao and Qi, Xiaojuan and Fu, Chi-Wing and Jia, Jiaya},
booktitle = {Conference on Computer Vision and Pattern Recognition},
year = {2022},
  pages = {1100--1109},
doi = {10.1109/CVPR52688.2022.00117},
url = {https://mlanthology.org/cvpr/2022/chu2022cvpr-twist/}
}