Perspective Transformer Nets: Learning Single-View 3D Object Reconstruction Without 3D Supervision

Abstract

Understanding the 3D world is a fundamental problem in computer vision. However, learning a good representation of 3D objects remains an open problem due to the high dimensionality of the data and the many factors of variation involved. In this work, we investigate the task of single-view 3D object reconstruction from a learning agent's perspective. We formulate the learning process as an interaction between 3D and 2D representations and propose an encoder-decoder network with a novel projection loss defined by the projective transformation. More importantly, the projection loss enables unsupervised learning from 2D observations without explicit 3D supervision. We demonstrate the ability of the model to generate a 3D volume from a single 2D image with three sets of experiments: (1) learning from single-class objects; (2) learning from multi-class objects; and (3) testing on novel object classes. Results show superior performance and better generalization ability for 3D object reconstruction when the projection loss is used.
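The key idea is that a predicted occupancy volume can be projected to a 2D silhouette through a differentiable perspective transformation, so the network can be trained against 2D masks alone. Below is a minimal sketch of such a projection loss in PyTorch; the function name `projection_loss`, the tensor names, and the shapes are illustrative assumptions, not the paper's released implementation.

```python
# Sketch of a silhouette projection loss, assuming a PyTorch-style setup.
import torch
import torch.nn.functional as F

def projection_loss(voxels, proj_grid, target_mask):
    """Project a predicted occupancy volume to a 2D silhouette and
    compare it against an observed ground-truth mask.

    voxels:      (N, 1, D, H, W) predicted occupancy probabilities
    proj_grid:   (N, D', H', W', 3) sampling grid in normalized
                 coordinates, derived from the camera's perspective
                 transformation (assumed precomputed per view)
    target_mask: (N, H', W') binary silhouette observed from that view
    """
    # Resample the volume along camera rays (the "perspective
    # transformer" step, realized here with trilinear grid sampling).
    sampled = F.grid_sample(voxels, proj_grid, align_corners=True)
    # Max over the depth dimension approximates ray casting: a pixel
    # is "on" if any voxel along its ray is occupied.
    silhouette = sampled.max(dim=2).values.squeeze(1)  # (N, H', W')
    # Binary cross-entropy between projected and observed silhouettes.
    return F.binary_cross_entropy(silhouette.clamp(0.0, 1.0), target_mask)
```

Because every step (grid sampling, max over depth, cross-entropy) is differentiable, gradients flow from the 2D mask loss back into the 3D volume decoder, which is what allows training without 3D supervision.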

Cite

Text

Yan et al. "Perspective Transformer Nets: Learning Single-View 3D Object Reconstruction Without 3D Supervision." Neural Information Processing Systems, 2016.

Markdown

[Yan et al. "Perspective Transformer Nets: Learning Single-View 3D Object Reconstruction Without 3D Supervision." Neural Information Processing Systems, 2016.](https://mlanthology.org/neurips/2016/yan2016neurips-perspective/)

BibTeX

@inproceedings{yan2016neurips-perspective,
  title     = {{Perspective Transformer Nets: Learning Single-View 3D Object Reconstruction Without 3D Supervision}},
  author    = {Yan, Xinchen and Yang, Jimei and Yumer, Ersin and Guo, Yijie and Lee, Honglak},
  booktitle = {Neural Information Processing Systems},
  year      = {2016},
  pages     = {1696--1704},
  url       = {https://mlanthology.org/neurips/2016/yan2016neurips-perspective/}
}