Shape2Scene: 3D Scene Representation Learning Through Pre-Training on Shape Data
Abstract
Current 3D self-supervised learning methods for 3D scenes face a data desert issue, resulting from the time-consuming and expensive process of collecting 3D scene data. Conversely, 3D shape datasets are easier to collect. Even so, existing pre-training strategies on shape data offer limited potential for 3D scene understanding due to significant disparities in point quantities. To tackle these challenges, we propose Shape2Scene (S2S), a novel method that learns representations of large-scale 3D scenes from 3D shape data. We first design multi-scale and high-resolution backbones for shape-level and scene-level 3D tasks, i.e., MH-P (point-based) and MH-V (voxel-based). MH-P/V establish direct paths to high-resolution features that capture deep semantic information across multiple scales, making them suitable for a wide range of 3D downstream tasks that rely heavily on high-resolution features. We then employ a Shape-to-Scene strategy (S2SS) that amalgamates points from various shapes into a random pseudo scene (comprising multiple objects) used as training data, mitigating the disparities between shapes and scenes. Finally, a point-point contrastive loss (PPC) is applied for the pre-training of MH-P/V, where the inherent correspondence (i.e., point pairs) is naturally obtained from S2SS. Extensive experiments demonstrate the transferability of the 3D representations learned by MH-P/V across shape-level and scene-level 3D tasks. MH-P achieves notable performance on well-known point cloud datasets (93.8% OA on ScanObjectNN and 87.6% instance mIoU on ShapeNetPart), and MH-V also achieves promising performance on 3D semantic segmentation and 3D object detection.
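To make the two pre-training ingredients in the abstract concrete, here is a minimal PyTorch sketch of (a) assembling a pseudo scene from several shapes and (b) an InfoNCE-style point-point contrastive loss over two views with known point correspondences. This is an illustration under stated assumptions, not the paper's exact implementation: the function names (`make_pseudo_scene`, `ppc_loss`), the rotation/translation augmentations, the scene extent, and the temperature are all hypothetical choices for exposition.

```python
import torch
import torch.nn.functional as F

def make_pseudo_scene(shapes, extent=4.0):
    """Sketch of the Shape-to-Scene (S2SS) idea: place each shape at a
    random pose so the union of points resembles a multi-object scene.

    `shapes` is a list of (N_i, 3) point tensors; the up-axis rotation
    and the translation range are illustrative assumptions.
    """
    placed = []
    for pts in shapes:
        theta = torch.rand(()) * 2 * torch.pi
        c, s = torch.cos(theta).item(), torch.sin(theta).item()
        rot = torch.tensor([[c, -s, 0.0],
                            [s,  c, 0.0],
                            [0.0, 0.0, 1.0]])          # rotate about z (up)
        offset = (torch.rand(1, 3) - 0.5) * extent      # random placement
        placed.append(pts @ rot.T + offset)
    return torch.cat(placed, dim=0)                     # (sum N_i, 3)

def ppc_loss(feat_a, feat_b, temperature=0.07):
    """Point-point contrastive loss sketch (InfoNCE form).

    `feat_a` and `feat_b` are (N, D) per-point features from two
    augmented views of the same pseudo scene, so row i of each view
    describes the same physical point -- the correspondence that S2SS
    provides for free.
    """
    feat_a = F.normalize(feat_a, dim=1)
    feat_b = F.normalize(feat_b, dim=1)
    logits = feat_a @ feat_b.T / temperature            # (N, N) similarities
    targets = torch.arange(feat_a.size(0))              # positives on diagonal
    return F.cross_entropy(logits, targets)
```

In a pre-training loop of this shape, two augmented copies of the same pseudo scene would be passed through the backbone (MH-P or MH-V in the paper's setting) to produce `feat_a` and `feat_b`, and `ppc_loss` would pull corresponding points together while pushing apart all others.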
Cite
Text
Feng et al. "Shape2Scene: 3D Scene Representation Learning Through Pre-Training on Shape Data." Proceedings of the European Conference on Computer Vision (ECCV), 2024. doi:10.1007/978-3-031-73001-6_5
Markdown
[Feng et al. "Shape2Scene: 3D Scene Representation Learning Through Pre-Training on Shape Data." Proceedings of the European Conference on Computer Vision (ECCV), 2024.](https://mlanthology.org/eccv/2024/feng2024eccv-shape2scene/) doi:10.1007/978-3-031-73001-6_5
BibTeX
@inproceedings{feng2024eccv-shape2scene,
title = {{Shape2Scene: 3D Scene Representation Learning Through Pre-Training on Shape Data}},
author = {Feng, Tuo and Wang, Wenguan and Quan, Ruijie and Yang, Yi},
booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
year = {2024},
doi = {10.1007/978-3-031-73001-6_5},
url = {https://mlanthology.org/eccv/2024/feng2024eccv-shape2scene/}
}