3D Shape Generation and Completion Through Point-Voxel Diffusion
Abstract
We propose a novel approach for probabilistic generative modeling of 3D shapes. Unlike most existing models that learn to deterministically translate a latent vector to a shape, our model, Point-Voxel Diffusion (PVD), is a unified, probabilistic formulation for unconditional shape generation and conditional, multi-modal shape completion. PVD marries denoising diffusion models with the hybrid, point-voxel representation of 3D shapes. It can be viewed as a series of denoising steps, reversing the diffusion process from observed point cloud data to Gaussian noise, and is trained by optimizing a variational lower bound to the (conditional) likelihood function. Experiments demonstrate that PVD is capable of synthesizing high-fidelity shapes, completing partial point clouds, and generating multiple completion results from single-view depth scans of real objects.
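For readers unfamiliar with the training objective the abstract describes, the following is a minimal sketch of one denoising-diffusion training step on a point cloud, using the standard simplified variational bound (noise-prediction MSE). It is an illustrative assumption, not the authors' released code: the `denoiser` callable stands in for the point-voxel network, and its name, signature, and the linear noise schedule are all hypothetical choices.

```python
import torch

def diffusion_training_step(denoiser, x0, num_steps=1000):
    """One training step; x0 is a clean point cloud of shape (batch, num_points, 3)."""
    batch = x0.shape[0]

    # Linear noise schedule (an assumption; the actual schedule is a design choice).
    betas = torch.linspace(1e-4, 0.02, num_steps)
    alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)

    # Sample a random timestep per example and Gaussian noise.
    t = torch.randint(0, num_steps, (batch,))
    noise = torch.randn_like(x0)

    # Forward diffusion: q(x_t | x_0) = N(sqrt(a_bar_t) * x_0, (1 - a_bar_t) * I).
    a_bar = alphas_cumprod[t].view(batch, 1, 1)
    x_t = a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * noise

    # The network predicts the added noise; the simplified variational
    # lower bound reduces to a mean-squared error on that prediction.
    pred_noise = denoiser(x_t, t)
    return torch.nn.functional.mse_loss(pred_noise, noise)
```

Sampling then reverses this process, starting from Gaussian noise and iteratively denoising; for conditional completion, the observed partial points are held fixed while only the missing points are denoised.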
Cite
Text
Zhou et al. "3D Shape Generation and Completion Through Point-Voxel Diffusion." International Conference on Computer Vision, 2021. doi:10.1109/ICCV48922.2021.00577
Markdown
[Zhou et al. "3D Shape Generation and Completion Through Point-Voxel Diffusion." International Conference on Computer Vision, 2021.](https://mlanthology.org/iccv/2021/zhou2021iccv-3d/) doi:10.1109/ICCV48922.2021.00577
BibTeX
@inproceedings{zhou2021iccv-3d,
title = {{3D Shape Generation and Completion Through Point-Voxel Diffusion}},
author = {Zhou, Linqi and Du, Yilun and Wu, Jiajun},
booktitle = {International Conference on Computer Vision},
year = {2021},
pages = {5826-5835},
doi = {10.1109/ICCV48922.2021.00577},
url = {https://mlanthology.org/iccv/2021/zhou2021iccv-3d/}
}