Point-NeRF: Point-Based Neural Radiance Fields
Abstract
Volumetric neural rendering methods like NeRF generate high-quality view synthesis results but are optimized per-scene, leading to prohibitively long reconstruction times. On the other hand, deep multi-view stereo methods can quickly reconstruct scene geometry via direct network inference. Point-NeRF combines the advantages of these two approaches by using neural 3D point clouds, with associated neural features, to model a radiance field. Point-NeRF can be rendered efficiently by aggregating neural point features near scene surfaces in a ray-marching-based rendering pipeline. Moreover, Point-NeRF can be initialized via direct inference of a pre-trained deep network to produce a neural point cloud; this point cloud can be fine-tuned to surpass the visual quality of NeRF with 30× faster training time. Point-NeRF can be combined with other 3D reconstruction methods and handles the errors and outliers of such methods via a novel point pruning and growing mechanism.
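The core rendering idea above, aggregating features from neural points near a shading location along a ray, can be sketched roughly as follows. This is an illustrative inverse-distance-weighted aggregation only; the actual method uses learned per-point MLPs and confidence values, and all names here (`aggregate_point_features`, `radius`, etc.) are our own, not from the paper's code.

```python
import numpy as np

def aggregate_point_features(query, points, feats, radius=0.5, eps=1e-8):
    """Aggregate neural point features near a shading location.

    Illustrative sketch of the Point-NeRF idea: each ray-marching
    sample gathers features from neighboring neural points. Here we
    use simple inverse-distance weighting; the paper instead applies
    learned MLPs to the per-point features before blending.

    query  : (3,)   shading location on the ray
    points : (N, 3) neural point positions
    feats  : (N, F) per-point neural features
    """
    dists = np.linalg.norm(points - query, axis=1)
    mask = dists < radius                    # only points near the surface contribute
    if not mask.any():
        return np.zeros(feats.shape[1])      # empty region -> zero feature (zero density)
    w = 1.0 / (dists[mask] + eps)            # closer points get larger weights
    w = w / w.sum()                          # normalize weights to sum to 1
    return (w[:, None] * feats[mask]).sum(axis=0)

# Toy usage: two neural points with 2-D features
pts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
fts = np.array([[1.0, 0.0], [0.0, 1.0]])
f = aggregate_point_features(np.array([0.0, 0.0, 0.0]), pts, fts)
```

In the full pipeline, the aggregated feature at each sample would then be decoded into density and view-dependent radiance and composited by standard volume rendering; returning a zero feature for empty regions is what lets the point cloud confine computation to near-surface samples.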
Cite

Text:
Xu et al. "Point-NeRF: Point-Based Neural Radiance Fields." Conference on Computer Vision and Pattern Recognition, 2022. doi:10.1109/CVPR52688.2022.00536

Markdown:
[Xu et al. "Point-NeRF: Point-Based Neural Radiance Fields." Conference on Computer Vision and Pattern Recognition, 2022.](https://mlanthology.org/cvpr/2022/xu2022cvpr-pointnerf/) doi:10.1109/CVPR52688.2022.00536

BibTeX:
@inproceedings{xu2022cvpr-pointnerf,
title = {{Point-NeRF: Point-Based Neural Radiance Fields}},
author = {Xu, Qiangeng and Xu, Zexiang and Philip, Julien and Bi, Sai and Shu, Zhixin and Sunkavalli, Kalyan and Neumann, Ulrich},
booktitle = {Conference on Computer Vision and Pattern Recognition},
year = {2022},
  pages = {5438--5448},
doi = {10.1109/CVPR52688.2022.00536},
url = {https://mlanthology.org/cvpr/2022/xu2022cvpr-pointnerf/}
}