Two Stream 3D Semantic Scene Completion

Abstract

Inferring the 3D geometry and semantic meaning of occluded surfaces is a very challenging task. Recently, a first end-to-end learning approach was proposed that completes a scene from a single depth image. The approach voxelizes the scene and predicts, for each voxel, whether it is occupied and, if so, its semantic class label. In this work, we propose a two-stream approach that leverages both depth information and semantic information inferred from the RGB image for this task. The approach constructs an incomplete 3D semantic tensor, which uses a compact three-channel encoding of the inferred semantic information, and applies a 3D CNN to infer the complete 3D semantic tensor. In our experimental evaluation, we show that the proposed two-stream approach substantially outperforms the state of the art for semantic scene completion.
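
To make the described pipeline concrete, below is a minimal sketch (not the authors' implementation) of how a depth map and a 2D semantic segmentation might be fused into a voxelized, three-channel semantic volume and passed to a small 3D CNN. The voxel-grid resolution, camera intrinsics, the particular three-channel color coding of class labels, and the network layout are all illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of a two-stream semantic-scene-completion input (illustrative only).
# Assumptions: pinhole intrinsics fx, fy, cx, cy; a 60x36x60 voxel grid; and an
# arbitrary 3-channel color code per semantic class. None of these come from the paper.
import numpy as np
import torch
import torch.nn as nn

NUM_CLASSES = 12                      # hypothetical number of semantic classes
CLASS_COLORS = np.random.RandomState(0).rand(NUM_CLASSES, 3).astype(np.float32)
GRID = (60, 36, 60)                   # (x, y, z) voxel resolution, assumed
VOXEL_SIZE = 0.08                     # meters per voxel, assumed

def build_semantic_volume(depth, seg, fx=518.8, fy=519.5, cx=325.6, cy=253.7):
    """Back-project each pixel into the voxel grid and write its 3-channel class code."""
    h, w = depth.shape
    vol = np.zeros(GRID + (3,), dtype=np.float32)        # incomplete 3D semantic tensor
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (us - cx) * z / fx
    y = (vs - cy) * z / fy
    # quantize camera coordinates to voxel indices (grid origin/extent are assumptions)
    ix = np.clip((x / VOXEL_SIZE + GRID[0] // 2).astype(int), 0, GRID[0] - 1)
    iy = np.clip((y / VOXEL_SIZE + GRID[1] // 2).astype(int), 0, GRID[1] - 1)
    iz = np.clip((z / VOXEL_SIZE).astype(int), 0, GRID[2] - 1)
    valid = z > 0
    vol[ix[valid], iy[valid], iz[valid]] = CLASS_COLORS[seg[valid]]
    return vol

class TinyCompletionNet(nn.Module):
    """Toy 3D CNN: maps the 3-channel semantic volume to per-voxel class scores."""
    def __init__(self, num_classes=NUM_CLASSES + 1):      # +1 for the "empty" class
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 16, 3, padding=2, dilation=2), nn.ReLU(),  # dilated conv for context
            nn.Conv3d(16, num_classes, 1),
        )

    def forward(self, x):              # x: (B, 3, X, Y, Z)
        return self.net(x)

if __name__ == "__main__":
    depth = np.random.uniform(0.5, 4.0, (480, 640)).astype(np.float32)  # fake depth map
    seg = np.random.randint(0, NUM_CLASSES, (480, 640))                 # fake 2D labels
    vol = build_semantic_volume(depth, seg)                             # (60, 36, 60, 3)
    x = torch.from_numpy(vol).permute(3, 0, 1, 2).unsqueeze(0)          # (1, 3, 60, 36, 60)
    scores = TinyCompletionNet()(x)
    print(scores.shape)                # torch.Size([1, 13, 60, 36, 60])
```

The sketch only illustrates the data flow: the 2D semantic stream is lifted into the same voxel grid as the depth stream, and a 3D network predicts a dense, per-voxel labeling that includes occluded space; the actual network and encoding used in the paper differ.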

Cite

Text

Garbade et al. "Two Stream 3D Semantic Scene Completion." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2019. doi:10.1109/CVPRW.2019.00055

Markdown

[Garbade et al. "Two Stream 3D Semantic Scene Completion." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2019.](https://mlanthology.org/cvprw/2019/garbade2019cvprw-two/) doi:10.1109/CVPRW.2019.00055

BibTeX

@inproceedings{garbade2019cvprw-two,
  title     = {{Two Stream 3D Semantic Scene Completion}},
  author    = {Garbade, Martin and Chen, Yueh-Tung and Sawatzky, Johann and Gall, Juergen},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
  year      = {2019},
  pages     = {416--425},
  doi       = {10.1109/CVPRW.2019.00055},
  url       = {https://mlanthology.org/cvprw/2019/garbade2019cvprw-two/}
}