When 3D Reconstruction Meets Ubiquitous RGB-D Images

Abstract

3D reconstruction from a single image is a classical problem in computer vision. However, it still poses great challenges for the reconstruction of daily-use objects with irregular shapes. In this paper, we propose to learn 3D reconstruction knowledge from informally captured RGB-D images, which are likely to become ubiquitous in daily life. The learning of 3D reconstruction is formulated as a category modeling problem, in which a model for each category is trained to encode category-specific knowledge for 3D reconstruction. The category model estimates the pixel-level 3D structure of an object from its 2D appearance, accounting for considerable variations in rotation, 3D structure, and texture. Learning 3D reconstruction from ubiquitous RGB-D images creates a new set of challenges. Experimental results demonstrate the effectiveness of the proposed approach.

Cite

Text

Zhang et al. "When 3D Reconstruction Meets Ubiquitous RGB-D Images." Conference on Computer Vision and Pattern Recognition, 2014. doi:10.1109/CVPR.2014.95

Markdown

[Zhang et al. "When 3D Reconstruction Meets Ubiquitous RGB-D Images." Conference on Computer Vision and Pattern Recognition, 2014.](https://mlanthology.org/cvpr/2014/zhang2014cvpr-3d/) doi:10.1109/CVPR.2014.95

BibTeX

@inproceedings{zhang2014cvpr-3d,
  title     = {{When 3D Reconstruction Meets Ubiquitous RGB-D Images}},
  author    = {Zhang, Quanshi and Song, Xuan and Shao, Xiaowei and Zhao, Huijing and Shibasaki, Ryosuke},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2014},
  doi       = {10.1109/CVPR.2014.95},
  url       = {https://mlanthology.org/cvpr/2014/zhang2014cvpr-3d/}
}