Deep Octree-Based CNNs with Output-Guided Skip Connections for 3D Shape and Scene Completion

Abstract

Acquiring complete and clean 3D shape and scene data is challenging due to geometric occlusion and insufficient views during 3D capture. We present a simple yet effective deep learning approach for completing noisy and incomplete input shapes or scenes. Our network is built upon octree-based CNNs (O-CNN) with a U-Net-like structure, which offers high computational and memory efficiency and supports very deep 3D CNN architectures. A novel output-guided skip connection is introduced to the network to better preserve the input geometry and to learn geometric priors from data effectively. We show that with these simple adaptations (output-guided skip connections and a deeper O-CNN of up to 70 layers), our network achieves state-of-the-art results in 3D shape completion and semantic scene completion.
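To give an intuition for the idea of an output-guided skip connection, the sketch below shows one plausible gating scheme: encoder features are passed to the decoder only where an intermediate output prediction indicates occupied geometry, so the completed (empty or noisy) regions rely on the learned prior instead of the raw input. This is a minimal illustrative example on flat per-voxel feature lists, not the paper's octree-based implementation; the function name and thresholding rule are assumptions for illustration.

```python
def output_guided_skip(encoder_feat, decoder_feat, predicted_occupancy, threshold=0.5):
    """Illustrative sketch (not the paper's octree implementation).

    Fuse encoder features into the decoder only where the intermediate
    output predicts occupied geometry; elsewhere, keep the decoder's
    learned prior untouched so input noise does not leak through.
    """
    fused = []
    for e, d, p in zip(encoder_feat, decoder_feat, predicted_occupancy):
        if p >= threshold:
            # Predicted occupied: preserve the input geometry via the skip path.
            fused.append(d + e)
        else:
            # Predicted empty: rely on the decoder's learned geometric prior.
            fused.append(d)
    return fused


# Example: the second voxel is predicted empty, so its encoder feature is dropped.
fused = output_guided_skip([1.0, 2.0], [0.5, 0.5], [0.9, 0.1])
# → [1.5, 0.5]
```

In an actual U-Net the gating would operate on feature tensors at matching octree depths, but the selection logic per spatial location is the same.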

Cite

Text

Wang et al. "Deep Octree-Based CNNs with Output-Guided Skip Connections for 3D Shape and Scene Completion." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2020. doi:10.1109/CVPRW50498.2020.00141

Markdown

[Wang et al. "Deep Octree-Based CNNs with Output-Guided Skip Connections for 3D Shape and Scene Completion." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2020.](https://mlanthology.org/cvprw/2020/wang2020cvprw-deep/) doi:10.1109/CVPRW50498.2020.00141

BibTeX

@inproceedings{wang2020cvprw-deep,
  title     = {{Deep Octree-Based CNNs with Output-Guided Skip Connections for 3D Shape and Scene Completion}},
  author    = {Wang, Peng-Shuai and Liu, Yang and Tong, Xin},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
  year      = {2020},
  pages     = {1074--1081},
  doi       = {10.1109/CVPRW50498.2020.00141},
  url       = {https://mlanthology.org/cvprw/2020/wang2020cvprw-deep/}
}