Single View Point Cloud Generation via Unified 3D Prototype
Abstract
As 3D point clouds become the representation of choice for multiple vision and graphics applications, such as autonomous driving and robotics, their generation by deep neural networks has attracted increasing attention in the research community. Despite the recent success of deep learning models in classification and segmentation, synthesizing point clouds remains challenging, especially from a single image. State-of-the-art (SOTA) approaches can generate a point cloud from a hidden vector; however, they treat 2D and 3D features equally and disregard the rich shape information within the 3D data. In this paper, we address this problem by integrating image features with 3D prototype features. Specifically, we propose to learn a set of 3D prototype features from a real point cloud dataset and dynamically adjust them throughout training. These prototypes are then integrated with incoming image features to guide the point cloud generation process. Experimental results show that our proposed method outperforms SOTA methods on single-image 3D reconstruction tasks.
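To make the idea concrete, below is a minimal PyTorch sketch of how learnable 3D prototypes could be fused with a 2D image feature before point cloud decoding. This is an illustration only, not the authors' implementation: the module name `PrototypeFusion`, the feature dimensions, the number of prototypes, and the attention-style mixing are all assumptions made for the example.

```python
import torch
import torch.nn as nn


class PrototypeFusion(nn.Module):
    """Illustrative sketch: learn K prototype features and fuse them with an image feature.

    The prototypes are ordinary parameters, so they are "dynamically adjusted"
    by gradient descent during training. Attention weights computed from the
    image feature select a shape-aware mixture of prototypes, which is then
    combined with the image feature to condition a point cloud decoder.
    """

    def __init__(self, img_dim=512, proto_dim=256, num_protos=32):
        super().__init__()
        # K learnable prototype vectors (hypothetical initialization).
        self.prototypes = nn.Parameter(torch.randn(num_protos, proto_dim))
        self.query = nn.Linear(img_dim, proto_dim)            # image -> query
        self.fuse = nn.Linear(img_dim + proto_dim, img_dim)   # fused conditioning vector

    def forward(self, img_feat):
        # img_feat: (B, img_dim) global feature from a 2D image encoder.
        q = self.query(img_feat)                               # (B, proto_dim)
        attn = torch.softmax(q @ self.prototypes.t(), dim=-1)  # (B, K) prototype weights
        proto_mix = attn @ self.prototypes                     # (B, proto_dim) mixture
        return self.fuse(torch.cat([img_feat, proto_mix], dim=-1))


# Usage sketch: the fused feature would feed a (hypothetical) point cloud decoder.
if __name__ == "__main__":
    img_feat = torch.randn(4, 512)          # batch of image encoder outputs
    fused = PrototypeFusion()(img_feat)     # (4, 512) prototype-conditioned feature
    print(fused.shape)
```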
Cite
Text
Lin et al. "Single View Point Cloud Generation via Unified 3D Prototype." AAAI Conference on Artificial Intelligence, 2021. doi:10.1609/AAAI.V35I3.16303
Markdown
[Lin et al. "Single View Point Cloud Generation via Unified 3D Prototype." AAAI Conference on Artificial Intelligence, 2021.](https://mlanthology.org/aaai/2021/lin2021aaai-single/) doi:10.1609/AAAI.V35I3.16303
BibTeX
@inproceedings{lin2021aaai-single,
title = {{Single View Point Cloud Generation via Unified 3D Prototype}},
author = {Lin, Yu and Wang, Yigong and Li, Yi-Fan and Wang, Zhuoyi and Gao, Yang and Khan, Latifur},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2021},
pages = {2064-2072},
doi = {10.1609/AAAI.V35I3.16303},
url = {https://mlanthology.org/aaai/2021/lin2021aaai-single/}
}