PFGS: High Fidelity Point Cloud Rendering via Feature Splatting

Abstract

Rendering high-fidelity images from sparse point clouds remains challenging. Existing learning-based approaches suffer from hole artifacts, missing details, or expensive computation. In this paper, we propose a novel framework to render high-quality images from sparse points. This method is the first attempt to bridge 3D Gaussian Splatting and point cloud rendering, and it comprises several cascaded modules. We first use a regressor to estimate Gaussian properties in a point-wise manner; the estimated properties are then used to rasterize neural feature descriptors, extracted by a multiscale extractor, onto 2D planes. The projected feature volume is gradually decoded into the final prediction via a multiscale, progressive decoder. The whole pipeline undergoes two-stage training driven by our well-designed progressive and multiscale reconstruction loss. Experiments on different benchmarks demonstrate the superiority of our method in rendering quality and the necessity of its main components. Project page: https://github.com/Mercerai/PFGS
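To make the first stage of the pipeline concrete, below is a minimal PyTorch sketch of a point-wise Gaussian property regressor in the spirit the abstract describes: an MLP that maps each input point (position plus color) to an opacity, an anisotropic scale, and a rotation quaternion. This is an illustrative assumption, not the authors' implementation; the class name, layer sizes, and input layout are hypothetical, and the subsequent feature splatting step is elided because it requires a differentiable Gaussian rasterizer.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GaussianPropertyRegressor(nn.Module):
    """Hypothetical point-wise regressor: predicts per-point Gaussian
    properties (opacity, 3D scale, rotation quaternion) from point
    positions and colors. A sketch only; not the PFGS release code."""

    def __init__(self, in_dim: int = 6, hidden: int = 128):
        super().__init__()
        # 1 opacity + 3 scale + 4 quaternion components per point.
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1 + 3 + 4),
        )

    def forward(self, pts: torch.Tensor):
        # pts: (N, 6) = xyz concatenated with rgb.
        out = self.mlp(pts)
        opacity = torch.sigmoid(out[:, :1])          # (N, 1), in (0, 1)
        scale = torch.exp(out[:, 1:4])               # (N, 3), positive
        rot = F.normalize(out[:, 4:8], dim=-1)       # (N, 4), unit quaternion
        return opacity, scale, rot

# Usage on a toy cloud of 1024 points:
regressor = GaussianPropertyRegressor()
pts = torch.rand(1024, 6)
opacity, scale, rot = regressor(pts)
```

In the full method these predicted properties would parameterize Gaussians that splat multiscale neural features, rather than raw colors, onto the image plane for the decoder to refine.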

Cite

Text

Wang et al. "PFGS: High Fidelity Point Cloud Rendering via Feature Splatting." Proceedings of the European Conference on Computer Vision (ECCV), 2024. doi:10.1007/978-3-031-73010-8_12

Markdown

[Wang et al. "PFGS: High Fidelity Point Cloud Rendering via Feature Splatting." Proceedings of the European Conference on Computer Vision (ECCV), 2024.](https://mlanthology.org/eccv/2024/wang2024eccv-pfgs/) doi:10.1007/978-3-031-73010-8_12

BibTeX

@inproceedings{wang2024eccv-pfgs,
  title     = {{PFGS: High Fidelity Point Cloud Rendering via Feature Splatting}},
  author    = {Wang, Jiaxu and Zhang, Ziyi and He, Junhao and Xu, Renjing},
  booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
  year      = {2024},
  doi       = {10.1007/978-3-031-73010-8_12},
  url       = {https://mlanthology.org/eccv/2024/wang2024eccv-pfgs/}
}