Representation-Agnostic Shape Fields

Abstract

3D shape analysis has been widely explored in the era of deep learning. Numerous models have been developed for various 3D data representation formats, e.g., MeshCNN for meshes, PointNet for point clouds, and VoxNet for voxels. In this study, we present Representation-Agnostic Shape Fields (RASF), a generalizable and computation-efficient shape embedding module for 3D deep learning. RASF is implemented as a learnable 3D grid with multiple channels that stores local geometry. Based on RASF, shape embeddings for various 3D shape representations (point clouds, meshes, and voxels) are retrieved by coordinate indexing. While there are multiple ways to optimize the learnable parameters of RASF, this paper provides two effective pre-training schemes: shape reconstruction and normal estimation. Once trained, RASF becomes a plug-and-play performance booster with negligible cost. Extensive experiments on diverse 3D representation formats, networks, and applications validate the universal effectiveness of the proposed RASF. Code and pre-trained models are publicly available at https://github.com/seanywang0408/RASF.
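The core lookup described in the abstract — a learnable multi-channel 3D grid queried by point coordinates — can be sketched as follows. This is a hypothetical illustration, not the authors' implementation: the function name `grid_lookup`, the grid resolution, and the use of trilinear interpolation are assumptions made for the example.

```python
import numpy as np

def grid_lookup(grid, points):
    """Trilinearly interpolate a feature grid of shape (R, R, R, C)
    at 3D query points normalized to [-1, 1]^3.

    Illustrative sketch of coordinate indexing into a learnable grid;
    not the RASF reference code.
    """
    R = grid.shape[0]
    # Map coordinates from [-1, 1] to continuous grid indices in [0, R-1].
    idx = (points + 1.0) * 0.5 * (R - 1)
    lo = np.clip(np.floor(idx).astype(int), 0, R - 2)  # lower corner per axis
    frac = idx - lo                                    # fractional offsets
    out = 0.0
    # Accumulate the 8 surrounding grid cells, weighted trilinearly.
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                w = (np.where(dx, frac[:, 0], 1 - frac[:, 0])
                     * np.where(dy, frac[:, 1], 1 - frac[:, 1])
                     * np.where(dz, frac[:, 2], 1 - frac[:, 2]))
                out = out + w[:, None] * grid[lo[:, 0] + dx,
                                              lo[:, 1] + dy,
                                              lo[:, 2] + dz]
    return out

rng = np.random.default_rng(0)
grid = rng.standard_normal((8, 8, 8, 16))  # resolution 8, 16 feature channels
pts = rng.uniform(-1, 1, size=(100, 3))    # 100 query points in [-1, 1]^3
emb = grid_lookup(grid, pts)
print(emb.shape)  # one embedding per point: (100, 16)
```

In a training setting the grid would be a learnable parameter optimized through this differentiable lookup; here it is random only to keep the sketch self-contained.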

Cite

Text

Huang et al. "Representation-Agnostic Shape Fields." International Conference on Learning Representations, 2022.

Markdown

[Huang et al. "Representation-Agnostic Shape Fields." International Conference on Learning Representations, 2022.](https://mlanthology.org/iclr/2022/huang2022iclr-representationagnostic/)

BibTeX

@inproceedings{huang2022iclr-representationagnostic,
  title     = {{Representation-Agnostic Shape Fields}},
  author    = {Huang, Xiaoyang and Yang, Jiancheng and Wang, Yanjun and Chen, Ziyu and Li, Linguo and Li, Teng and Ni, Bingbing and Zhang, Wenjun},
  booktitle = {International Conference on Learning Representations},
  year      = {2022},
  url       = {https://mlanthology.org/iclr/2022/huang2022iclr-representationagnostic/}
}