3D-PSRNet: Part Segmented 3D Point Cloud Reconstruction from a Single Image

Abstract

We propose a mechanism to reconstruct part-annotated 3D point clouds of objects given just a single input image. We demonstrate that jointly training for both reconstruction and segmentation leads to improved performance on both tasks compared to training for each task individually. The key idea is to propagate information from each task so as to aid the other during the training procedure. Towards this end, we introduce a location-aware segmentation loss in the training regime. We empirically show the effectiveness of the proposed loss in generating more faithful part reconstructions while also improving segmentation accuracy. We thoroughly evaluate the proposed approach on different object categories from the ShapeNet dataset to obtain improved results in reconstruction as well as segmentation.
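A minimal sketch of what a location-aware segmentation loss could look like, assuming (as a hypothetical reading of the abstract, not the paper's exact formulation) that each predicted point borrows the part label of its nearest ground-truth point before a standard cross-entropy is applied. All function and variable names here are illustrative:

```python
import numpy as np

def location_aware_seg_loss(pred_pts, pred_logits, gt_pts, gt_labels):
    """Hypothetical sketch: transfer each predicted point's target part
    label from its nearest ground-truth point, then apply softmax
    cross-entropy on the predicted per-point part logits."""
    # Pairwise squared distances between predicted and GT points (N x M).
    d2 = ((pred_pts[:, None, :] - gt_pts[None, :, :]) ** 2).sum(-1)
    nn = d2.argmin(axis=1)           # index of nearest GT point per prediction
    target = gt_labels[nn]           # part labels transferred by location
    # Numerically stable log-softmax over part classes.
    shifted = pred_logits - pred_logits.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    # Negative log-likelihood of the transferred labels, averaged over points.
    return -log_probs[np.arange(len(target)), target].mean()
```

The nearest-neighbour matching is what makes the loss "location-aware": a point's segmentation target depends on where it was reconstructed, so gradients couple the two tasks.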

Cite

Text

Mandikal et al. "3D-PSRNet: Part Segmented 3D Point Cloud Reconstruction from a Single Image." European Conference on Computer Vision Workshops, 2018. doi:10.1007/978-3-030-11015-4_50

Markdown

[Mandikal et al. "3D-PSRNet: Part Segmented 3D Point Cloud Reconstruction from a Single Image." European Conference on Computer Vision Workshops, 2018.](https://mlanthology.org/eccvw/2018/mandikal2018eccvw-3dpsrnet/) doi:10.1007/978-3-030-11015-4_50

BibTeX

@inproceedings{mandikal2018eccvw-3dpsrnet,
  title     = {{3D-PSRNet: Part Segmented 3D Point Cloud Reconstruction from a Single Image}},
  author    = {Mandikal, Priyanka and Navaneet, K. L. and Babu, R. Venkatesh},
  booktitle = {European Conference on Computer Vision Workshops},
  year      = {2018},
  pages     = {662--674},
  doi       = {10.1007/978-3-030-11015-4_50},
  url       = {https://mlanthology.org/eccvw/2018/mandikal2018eccvw-3dpsrnet/}
}