Implicit Geometric Regularization for Learning Shapes
Abstract
Representing shapes as level-sets of neural networks has recently proven useful for a variety of shape analysis and reconstruction tasks. So far, such representations were computed using either: (i) pre-computed implicit shape representations; or (ii) loss functions explicitly defined over the neural level-sets. In this paper we offer a new paradigm for computing high-fidelity implicit neural representations directly from raw data (i.e., point clouds, with or without normal information). We observe that a rather simple loss function, encouraging the neural network to vanish on the input point cloud and to have a unit-norm gradient, possesses an implicit geometric regularization property that favors smooth and natural zero level-set surfaces, avoiding bad zero-loss solutions. We provide a theoretical analysis of this property for the linear case and show that, in practice, our method leads to state-of-the-art implicit neural representations with higher levels of detail and fidelity compared to previous methods.
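The loss the abstract describes has two terms: a data term driving the network to vanish on the input points, and an eikonal term driving its spatial gradient toward unit norm. As a minimal sketch, consider the linear case the paper analyzes, f(x) = w·x + b, where the spatial gradient is simply w, so both terms can be written and minimized in closed form with plain gradient descent. All names below (`igr_loss`, `grads`, the toy plane data, and the weight `lam`) are illustrative assumptions, not the paper's code.

```python
import numpy as np

def igr_loss(w, b, pts, lam=0.1):
    # Data term: f should vanish on the input point cloud.
    vanish = np.mean((pts @ w + b) ** 2)
    # Eikonal term: for a linear f, grad_x f = w, so we push ||w|| toward 1.
    eikonal = (np.linalg.norm(w) - 1.0) ** 2
    return vanish + lam * eikonal

def grads(w, b, pts, lam=0.1):
    # Analytic gradients of igr_loss with respect to (w, b).
    r = pts @ w + b
    n = len(pts)
    nw = np.linalg.norm(w)
    gw = 2.0 * pts.T @ r / n + 2.0 * lam * (nw - 1.0) * w / nw
    gb = 2.0 * r.mean()
    return gw, gb

# Toy point cloud sampled on the plane x0 = 0.5 (x1, x2 free).
rng = np.random.default_rng(0)
pts = rng.normal(size=(200, 3)) * np.array([0.0, 1.0, 1.0]) + np.array([0.5, 0.0, 0.0])

# Plain gradient descent on (w, b).
w, b = rng.normal(size=3), 0.0
for _ in range(2000):
    gw, gb = grads(w, b, pts)
    w -= 0.1 * gw
    b -= 0.1 * gb

# The zero level-set of the learned f approximates the plane x0 = 0.5:
# w ends up near (+/-1, 0, 0) with b near -0.5 * w[0], and ||w|| near 1.
```

Note the degenerate solution w = 0, b = 0 zeroes the data term but pays the eikonal penalty, which is one way to see how the eikonal term rules out trivial zero-loss fits; the paper's analysis makes this precise for the linear case.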
Cite

Text:
Gropp et al. "Implicit Geometric Regularization for Learning Shapes." International Conference on Machine Learning, 2020.

Markdown:
[Gropp et al. "Implicit Geometric Regularization for Learning Shapes." International Conference on Machine Learning, 2020.](https://mlanthology.org/icml/2020/gropp2020icml-implicit/)

BibTeX:
@inproceedings{gropp2020icml-implicit,
title = {{Implicit Geometric Regularization for Learning Shapes}},
author = {Gropp, Amos and Yariv, Lior and Haim, Niv and Atzmon, Matan and Lipman, Yaron},
booktitle = {International Conference on Machine Learning},
year = {2020},
  pages = {3789--3799},
volume = {119},
url = {https://mlanthology.org/icml/2020/gropp2020icml-implicit/}
}