Deep Cascade Generation on Point Sets
Abstract
This paper proposes a deep cascade network to generate the 3D geometry of an object as a point cloud, i.e., a set of permutation-insensitive points. Such a surface representation is easy to learn from, but its lack of geometric connectivity prevents exploiting the rich low-dimensional topological manifold of the object shape. To benefit from this simple structure while still exploiting rich neighborhood information across points, this paper proposes a two-stage cascade model on point sets. Specifically, our method first adopts a state-of-the-art point set autoencoder to generate a sparse, coarse shape, and then locally refines it by encoding neighborhood connectivity in a graph representation. An ensemble of sparse refined surfaces is designed to alleviate the local minima caused by modeling complex geometric manifolds. Moreover, our model uses a dynamically-weighted loss function that jointly penalizes the generation outputs of the cascade levels at different training stages in a coarse-to-fine manner. Comparative evaluation on the public ShapeNet benchmark demonstrates superior performance of the proposed model over state-of-the-art methods on both single-view shape reconstruction and shape autoencoding.
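The coarse-to-fine training objective described in the abstract can be sketched as a weighted sum of per-stage point-set losses. The following is a minimal illustrative sketch, not the paper's exact formulation: it assumes the per-stage loss is a symmetric Chamfer distance and that the dynamic weight follows a simple linear schedule shifting emphasis from the coarse stage to the refined stage over training (both are assumptions for illustration).

```python
import numpy as np

def chamfer_distance(p, q):
    """Symmetric Chamfer distance between point sets p (N, 3) and q (M, 3)."""
    # Pairwise squared Euclidean distances, shape (N, M).
    d = np.sum((p[:, None, :] - q[None, :, :]) ** 2, axis=-1)
    # Average nearest-neighbor distance in both directions.
    return d.min(axis=1).mean() + d.min(axis=0).mean()

def cascade_loss(coarse, refined, gt, epoch, total_epochs):
    """Dynamically-weighted joint loss over the two cascade levels.

    Early in training the coarse stage dominates; the weight then shifts
    linearly toward the refined output. The linear schedule is an
    illustrative assumption, not the paper's exact weighting.
    """
    alpha = 1.0 - epoch / total_epochs  # weight on the coarse stage
    return (alpha * chamfer_distance(coarse, gt)
            + (1.0 - alpha) * chamfer_distance(refined, gt))
```

At `epoch = 0` the loss reduces to the coarse-stage Chamfer distance alone, and as `epoch` approaches `total_epochs` only the refined output is penalized, mirroring the coarse-to-fine training described above.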
Cite
Text
Wang et al. "Deep Cascade Generation on Point Sets." International Joint Conference on Artificial Intelligence, 2019. doi:10.24963/IJCAI.2019/517
Markdown
[Wang et al. "Deep Cascade Generation on Point Sets." International Joint Conference on Artificial Intelligence, 2019.](https://mlanthology.org/ijcai/2019/wang2019ijcai-deep/) doi:10.24963/IJCAI.2019/517
BibTeX
@inproceedings{wang2019ijcai-deep,
title = {{Deep Cascade Generation on Point Sets}},
author = {Wang, Kaiqi and Chen, Ke and Jia, Kui},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {2019},
pages = {3726--3732},
doi = {10.24963/IJCAI.2019/517},
url = {https://mlanthology.org/ijcai/2019/wang2019ijcai-deep/}
}