3D-PRNN: Generating Shape Primitives with Recurrent Neural Networks

Abstract

The success of various applications, including robotics, digital content creation, and visualization, demands a structured and abstract representation of the 3D world from limited sensor data. Inspired by the nature of human perception of 3D shapes as a collection of simple parts, we explore such an abstract shape representation based on primitives. Given a single depth image of an object, we present 3D-PRNN, a generative recurrent neural network that synthesizes multiple plausible shapes composed of a set of primitives. Our generative model encodes symmetry characteristics of common man-made objects, preserves long-range structural coherence, and describes objects of varying complexity with a compact representation. We also propose a method based on Gaussian Fields to generate a large-scale dataset of primitive-based shape representations to train our network. We evaluate our approach on a wide range of examples and show that it outperforms nearest-neighbor-based shape retrieval methods and is on par with voxel-based generative models while using a significantly reduced parameter space.

Cite

Text

Zou et al. "3D-PRNN: Generating Shape Primitives with Recurrent Neural Networks." International Conference on Computer Vision, 2017. doi:10.1109/ICCV.2017.103

Markdown

[Zou et al. "3D-PRNN: Generating Shape Primitives with Recurrent Neural Networks." International Conference on Computer Vision, 2017.](https://mlanthology.org/iccv/2017/zou2017iccv-3dprnn/) doi:10.1109/ICCV.2017.103

BibTeX

@inproceedings{zou2017iccv-3dprnn,
  title     = {{3D-PRNN: Generating Shape Primitives with Recurrent Neural Networks}},
  author    = {Zou, Chuhang and Yumer, Ersin and Yang, Jimei and Ceylan, Duygu and Hoiem, Derek},
  booktitle = {International Conference on Computer Vision},
  year      = {2017},
  doi       = {10.1109/ICCV.2017.103},
  url       = {https://mlanthology.org/iccv/2017/zou2017iccv-3dprnn/}
}