Efficient 3D Semantic Segmentation with Superpoint Transformer

Abstract

We introduce a novel superpoint-based transformer architecture for efficient semantic segmentation of large-scale 3D scenes. Our method incorporates a fast algorithm to partition point clouds into a hierarchical superpoint structure, which makes our preprocessing 7 times faster than existing superpoint-based approaches. Additionally, we leverage a self-attention mechanism to capture the relationships between superpoints at multiple scales, leading to state-of-the-art performance on three challenging benchmark datasets: S3DIS (76.0% mIoU 6-fold validation), KITTI-360 (63.5% on Val), and DALES (79.6%). With only 212k parameters, our approach is up to 200 times more compact than other state-of-the-art models while maintaining similar performance. Furthermore, our model can be trained on a single GPU in 3 hours for a fold of the S3DIS dataset, which is 7x to 70x fewer GPU-hours than the best-performing methods. Our code and models are accessible at github.com/drprojects/superpoint_transformer.
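
To make the multi-scale idea concrete, below is a minimal, hypothetical PyTorch sketch of self-attention over superpoint features at two hierarchy levels. It is an illustration only, not the authors' architecture: the two-level structure, the mean pooling between levels, and the segment_mean helper are all simplifying assumptions (the 13 output classes happen to match S3DIS). See the linked repository for the actual implementation.

# Minimal sketch, NOT the authors' implementation: self-attention over
# superpoint features at two hierarchy levels. The feature size, the
# number of levels, and the mean pooling between levels are illustrative
# assumptions; see github.com/drprojects/superpoint_transformer for the
# real architecture.
import torch
import torch.nn as nn


def segment_mean(x: torch.Tensor, index: torch.Tensor, num_segments: int) -> torch.Tensor:
    # Average the features of items sharing the same parent index.
    out = torch.zeros(num_segments, x.shape[1], device=x.device)
    count = torch.zeros(num_segments, 1, device=x.device)
    out.index_add_(0, index, x)
    count.index_add_(0, index, torch.ones(x.shape[0], 1, device=x.device))
    return out / count.clamp(min=1)


class ToySuperpointTransformer(nn.Module):
    def __init__(self, dim: int = 64, num_classes: int = 13):
        super().__init__()
        # One self-attention block per hierarchy level (two levels here).
        self.attn1 = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.attn2 = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.head = nn.Linear(2 * dim, num_classes)

    def forward(self, feats: torch.Tensor, parent: torch.Tensor) -> torch.Tensor:
        # feats:  (S1, dim) features of the fine (level-1) superpoints
        # parent: (S1,)     index of each fine superpoint's coarse parent
        s2 = int(parent.max()) + 1

        # Attention among fine superpoints.
        x1, _ = self.attn1(feats[None], feats[None], feats[None])
        x1 = x1[0]

        # Pool to coarse superpoints, attend there, broadcast context back down.
        coarse = segment_mean(x1, parent, s2)
        x2, _ = self.attn2(coarse[None], coarse[None], coarse[None])
        x2 = x2[0][parent]

        # Classify each fine superpoint from both scales.
        return self.head(torch.cat([x1, x2], dim=-1))


# Usage on random data: 100 fine superpoints grouped into 10 coarse ones.
model = ToySuperpointTransformer()
logits = model(torch.randn(100, 64), torch.randint(0, 10, (100,)))
print(logits.shape)  # torch.Size([100, 13])

Broadcasting the coarse-level output back to each fine superpoint via its parent index is what lets every superpoint combine local detail with wider scene context before classification.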

Cite

Text

Robert et al. "Efficient 3D Semantic Segmentation with Superpoint Transformer." International Conference on Computer Vision, 2023. doi:10.1109/ICCV51070.2023.01577

Markdown

[Robert et al. "Efficient 3D Semantic Segmentation with Superpoint Transformer." International Conference on Computer Vision, 2023.](https://mlanthology.org/iccv/2023/robert2023iccv-efficient/) doi:10.1109/ICCV51070.2023.01577

BibTeX

@inproceedings{robert2023iccv-efficient,
  title     = {{Efficient 3D Semantic Segmentation with Superpoint Transformer}},
  author    = {Robert, Damien and Raguet, Hugo and Landrieu, Lo{\"i}c},
  booktitle = {International Conference on Computer Vision},
  year      = {2023},
  pages     = {17195--17204},
  doi       = {10.1109/ICCV51070.2023.01577},
  url       = {https://mlanthology.org/iccv/2023/robert2023iccv-efficient/}
}