Semi-Supervised LiDAR Semantic Segmentation with Spatial Consistency Training
Abstract
We study the underexplored problem of semi-supervised learning (SSL) in LiDAR semantic segmentation, motivated by the fact that annotating LiDAR point clouds is expensive and hinders the scalability of fully-supervised methods. Our core idea is to leverage the strong spatial cues of LiDAR point clouds to better exploit unlabeled data. We propose LaserMix, which mixes laser beams from different LiDAR scans and encourages the model to make consistent and confident predictions before and after mixing. Our framework has three appealing properties. 1) Generic: LaserMix is agnostic to LiDAR representations, hence our SSL framework can be universally applied. 2) Statistically grounded: We provide a detailed analysis that theoretically explains the applicability of the proposed framework. 3) Effective: Comprehensive experiments on popular LiDAR segmentation datasets demonstrate the effectiveness and superiority of our framework. Notably, we achieve results competitive with fully-supervised counterparts using 2x to 5x fewer labels, and improve the supervised-only baseline by a relative 10.8%. We hope this concise yet high-performing framework can facilitate future research in semi-supervised LiDAR segmentation.
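To make the mixing idea concrete, below is a minimal, hypothetical sketch of beam-wise mixing between two scans. It assumes the partitioning is done by laser inclination (pitch) angle, with points binned into a few angular areas that are then interleaved between the two scans; the function name, the number of areas, and the pitch range are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def laser_mix(points_a, points_b, num_areas=4,
              pitch_min=-25.0, pitch_max=3.0):
    """Illustrative beam-wise mixing of two LiDAR scans (N x 3 arrays).

    Points are binned by inclination angle into `num_areas` ranges;
    even-indexed areas of one scan are combined with odd-indexed areas
    of the other, producing two mixed scans. Parameter values here are
    assumptions for illustration only.
    """
    def pitch_deg(pts):
        # inclination angle of each point above the sensor's horizontal plane
        return np.degrees(np.arctan2(pts[:, 2],
                                     np.linalg.norm(pts[:, :2], axis=1)))

    edges = np.linspace(pitch_min, pitch_max, num_areas + 1)
    # area index per point, clipped into [0, num_areas - 1]
    idx_a = np.clip(np.digitize(pitch_deg(points_a), edges) - 1, 0, num_areas - 1)
    idx_b = np.clip(np.digitize(pitch_deg(points_b), edges) - 1, 0, num_areas - 1)

    # interleave: even areas from A with odd areas from B, and vice versa
    mix1 = np.concatenate([points_a[idx_a % 2 == 0], points_b[idx_b % 2 == 1]])
    mix2 = np.concatenate([points_b[idx_b % 2 == 0], points_a[idx_a % 2 == 1]])
    return mix1, mix2
```

In the SSL setting, semantic labels (or pseudo-labels) would be mixed with the same area assignment, and a consistency loss would encourage predictions on the mixed scans to agree with the mixed predictions of the originals.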
Cite

Text:
Kong et al. "Semi-Supervised LiDAR Semantic Segmentation with Spatial Consistency Training." ICLR 2023 Workshops: SR4AD, 2023.

Markdown:
[Kong et al. "Semi-Supervised LiDAR Semantic Segmentation with Spatial Consistency Training." ICLR 2023 Workshops: SR4AD, 2023.](https://mlanthology.org/iclrw/2023/kong2023iclrw-semisupervised/)

BibTeX:
@inproceedings{kong2023iclrw-semisupervised,
title = {{Semi-Supervised LiDAR Semantic Segmentation with Spatial Consistency Training}},
author = {Kong, Lingdong and Ren, Jiawei and Pan, Liang and Liu, Ziwei},
booktitle = {ICLR 2023 Workshops: SR4AD},
year = {2023},
url = {https://mlanthology.org/iclrw/2023/kong2023iclrw-semisupervised/}
}