Neural Point-Based Graphics

Abstract

We present a new point-based approach for modeling the appearance of real scenes. The approach uses a raw point cloud as the geometric representation of a scene, and augments each point with a learnable neural descriptor that encodes local geometry and appearance. A deep rendering network is learned in parallel with the descriptors, so that new views of the scene can be obtained by passing the rasterizations of the point cloud from new viewpoints through this network. The input rasterizations use the learned descriptors as point pseudo-colors. We show that the proposed approach can be used for modeling complex scenes and obtaining their photorealistic views, while avoiding explicit surface estimation and meshing. In particular, compelling results are obtained for scenes scanned using hand-held commodity RGB-D sensors as well as standard RGB cameras, even in the presence of objects that are challenging for standard mesh-based modeling.
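To make the described pipeline concrete, below is a minimal PyTorch sketch of the idea, not the authors' implementation: each point carries a learnable descriptor, the points are rasterized into a descriptor image by pinhole projection with painter's-algorithm depth ordering, and a small convolutional network maps that image to RGB. The class name, the 8-channel descriptor size, and the toy renderer are illustrative assumptions; the paper's renderer is a multi-scale U-Net-style network operating on rasterizations at several resolutions.

import torch
import torch.nn as nn

class NeuralPointRenderer(nn.Module):
    def __init__(self, num_points, descriptor_dim=8):
        super().__init__()
        # One learnable descriptor per point, optimized jointly with the network.
        self.descriptors = nn.Parameter(0.01 * torch.randn(num_points, descriptor_dim))
        # Toy rendering network standing in for the paper's multi-scale renderer:
        # maps the rasterized descriptor image to RGB.
        self.net = nn.Sequential(
            nn.Conv2d(descriptor_dim, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 1),
        )

    def rasterize(self, xyz_cam, K, height, width):
        # Pinhole projection of points given in camera coordinates,
        # with intrinsics matrix K.
        z = xyz_cam[:, 2].clamp(min=1e-6)  # guard against division by zero
        u = (K[0, 0] * xyz_cam[:, 0] / z + K[0, 2]).round().long()
        v = (K[1, 1] * xyz_cam[:, 1] / z + K[1, 2]).round().long()
        keep = (xyz_cam[:, 2] > 0) & (u >= 0) & (u < width) & (v >= 0) & (v < height)
        u, v, z, desc = u[keep], v[keep], z[keep], self.descriptors[keep]
        image = torch.zeros(self.descriptors.shape[1], height, width)
        # Painter's algorithm as a simple stand-in for z-buffering: draw
        # far-to-near so the nearest point's descriptor survives per pixel.
        for i in torch.argsort(z, descending=True).tolist():
            image[:, v[i], u[i]] = desc[i]
        return image

    def forward(self, xyz_cam, K, height, width):
        raster = self.rasterize(xyz_cam, K, height, width)
        return self.net(raster.unsqueeze(0))  # (1, 3, H, W) rendered view

# Fitting a scene would minimize a photometric loss between renders and the
# captured images, updating descriptors and network weights together.
# Synthetic data below is only for demonstration:
points = torch.randn(1000, 3) * 0.5 + torch.tensor([0.0, 0.0, 3.0])
K = torch.tensor([[100.0, 0.0, 64.0], [0.0, 100.0, 64.0], [0.0, 0.0, 1.0]])
model = NeuralPointRenderer(num_points=1000)
target = torch.rand(1, 3, 128, 128)  # stand-in for a ground-truth photo
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss = nn.functional.l1_loss(model(points, K, 128, 128), target)
loss.backward()
optimizer.step()

Note that the paper trains with a perceptual loss rather than the plain L1 loss used here, and rasterizes at multiple resolutions to cope with variable point density; the sketch collapses both into the simplest form that still couples descriptor learning with rendering.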

Cite

Text

Aliev et al. "Neural Point-Based Graphics." Proceedings of the European Conference on Computer Vision (ECCV), 2020. doi:10.1007/978-3-030-58542-6_42

Markdown

[Aliev et al. "Neural Point-Based Graphics." Proceedings of the European Conference on Computer Vision (ECCV), 2020.](https://mlanthology.org/eccv/2020/aliev2020eccv-neural/) doi:10.1007/978-3-030-58542-6_42

BibTeX

@inproceedings{aliev2020eccv-neural,
  title     = {{Neural Point-Based Graphics}},
  author    = {Aliev, Kara-Ali and Sevastopolsky, Artem and Kolos, Maria and Ulyanov, Dmitry and Lempitsky, Victor},
  booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
  year      = {2020},
  doi       = {10.1007/978-3-030-58542-6_42},
  url       = {https://mlanthology.org/eccv/2020/aliev2020eccv-neural/}
}