3DMV: Joint 3D-Multi-View Prediction for 3D Semantic Scene Segmentation
Abstract
We present 3DMV, a novel method for 3D semantic scene segmentation of RGB-D scans using a joint 3D-multi-view prediction network. In contrast to existing methods that either use geometry or RGB data as input for this task, we combine both data modalities in a joint, end-to-end network architecture. Rather than simply projecting color data into a volumetric grid and operating solely in 3D -- which would result in insufficient detail -- we first extract feature maps from associated RGB images. These features are then directly projected into the volumetric feature grid of a 3D network using a differentiable backprojection layer. Since our target is 3D scanning scenarios with possibly many frames, we use a multi-view pooling approach in order to handle a varying number of RGB input views. This learned combination of RGB and geometric features with our joint 2D-3D architecture achieves significantly better results than existing baselines. For instance, our final result on the ScanNet 3D segmentation benchmark increases from 52.8% to 75% accuracy compared to existing volumetric architectures.
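To make the projection and pooling steps concrete, below is a minimal PyTorch sketch of a differentiable backprojection followed by multi-view max pooling, as described in the abstract. The function names, feature dimensions, and index layout are illustrative assumptions, not the authors' released code; in the actual method, the voxel-to-pixel correspondences would come from the known camera poses and depth maps of the RGB-D scan.

```python
import torch

def backproject_features(feat2d, lin_voxel_idx, lin_pixel_idx, grid_dims):
    """Scatter per-pixel 2D features into a flattened 3D voxel feature grid.

    feat2d:        (C, H, W) feature map extracted by the 2D network for one view.
    lin_voxel_idx: (N,) linear indices of the voxels visible in this view.
    lin_pixel_idx: (N,) linear indices into the H*W image plane, one per voxel.
    grid_dims:     (X, Y, Z) voxel grid resolution.

    The correspondence indices are assumed to be precomputed from the view's
    camera pose and depth map (hypothetical preprocessing, not shown here).
    """
    C = feat2d.shape[0]
    num_voxels = grid_dims[0] * grid_dims[1] * grid_dims[2]
    volume = torch.zeros(C, num_voxels, dtype=feat2d.dtype)
    flat = feat2d.reshape(C, -1)  # (C, H*W)
    # Advanced-indexing copy: gradients flow back into feat2d, which is what
    # makes this backprojection layer differentiable end-to-end.
    volume[:, lin_voxel_idx] = flat[:, lin_pixel_idx]
    return volume.reshape(C, *grid_dims)

def pool_multi_view(view_volumes):
    """Element-wise max pooling over a variable number of per-view volumes."""
    return torch.stack(view_volumes, dim=0).max(dim=0).values

# Toy usage with three views; all shapes are illustrative only.
grid_dims = (31, 31, 62)
num_voxels = grid_dims[0] * grid_dims[1] * grid_dims[2]
volumes = []
for _ in range(3):
    feat2d = torch.randn(128, 32, 41, requires_grad=True)  # stand-in 2D features
    n = 1000
    vox = torch.randint(0, num_voxels, (n,))
    pix = torch.randint(0, 32 * 41, (n,))
    volumes.append(backproject_features(feat2d, vox, pix, grid_dims))
fused = pool_multi_view(volumes)  # (128, 31, 31, 62), input to the 3D network
```

Max pooling makes the fusion invariant to view order and to the number of views, which matches the abstract's requirement of handling a varying number of RGB inputs per scan.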
Cite
Text
Dai and Niessner. "3DMV: Joint 3D-Multi-View Prediction for 3D Semantic Scene Segmentation." Proceedings of the European Conference on Computer Vision (ECCV), 2018. doi:10.1007/978-3-030-01249-6_28

Markdown
[Dai and Niessner. "3DMV: Joint 3D-Multi-View Prediction for 3D Semantic Scene Segmentation." Proceedings of the European Conference on Computer Vision (ECCV), 2018.](https://mlanthology.org/eccv/2018/dai2018eccv-3dmv/) doi:10.1007/978-3-030-01249-6_28

BibTeX
@inproceedings{dai2018eccv-3dmv,
title = {{3DMV: Joint 3D-Multi-View Prediction for 3D Semantic Scene Segmentation}},
author = {Dai, Angela and Niessner, Matthias},
booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
year = {2018},
doi = {10.1007/978-3-030-01249-6_28},
url = {https://mlanthology.org/eccv/2018/dai2018eccv-3dmv/}
}