Generative Sparse-View Gaussian Splatting
Abstract
Novel view synthesis from limited observations remains a significant challenge due to the lack of information in under-sampled regions, often resulting in noticeable artifacts. We introduce Generative Sparse-View Gaussian Splatting (GS-GS), a general pipeline designed to enhance the rendering quality of 3D/4D Gaussian Splatting (GS) when training views are sparse. Our method generates unseen views with generative models, specifically leveraging pre-trained image diffusion models to iteratively refine view consistency and hallucinate additional images at pseudo views. This approach improves 3D/4D scene reconstruction by explicitly enforcing semantic correspondences during the generation of unseen views, thereby enhancing geometric consistency; purely generative methods, by contrast, often fail to maintain view consistency. Extensive evaluations on various 3D/4D datasets (Blender, LLFF, Mip-NeRF360, and Neural 3D Video) demonstrate that our GS-GS outperforms existing state-of-the-art methods in rendering quality without sacrificing efficiency.
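The iterative hallucinate-and-refine loop the abstract outlines can be sketched roughly as below. This is a minimal illustration of the general idea under stated assumptions, not the authors' released method: GaussianScene, sample_pseudo_poses, and their signatures are hypothetical placeholders, and a generic Stable Diffusion img2img pipeline from the diffusers library stands in for whatever pre-trained diffusion model and semantic-correspondence machinery the paper actually uses.

# Hypothetical sketch of the pseudo-view loop described in the abstract:
# render pseudo views from the current Gaussian model, refine them with a
# pre-trained image diffusion model, and feed the refined images back as
# extra supervision. This is not the authors' code.
import torch
from diffusers import StableDiffusionImg2ImgPipeline


class GaussianScene:
    """Placeholder for a 3D/4D Gaussian Splatting scene (hypothetical API)."""

    def render(self, pose):
        """Rasterize the current Gaussians at `pose`; returns a PIL image."""
        raise NotImplementedError

    def optimize_step(self, image, pose):
        """One photometric optimization step against a (pseudo) view."""
        raise NotImplementedError


def sample_pseudo_poses(train_poses, n):
    """Interpolate camera poses between the sparse training views (placeholder)."""
    raise NotImplementedError


def hallucinate_pseudo_views(scene, train_poses, prompt, n_views=8, n_rounds=3):
    # A generic pre-trained img2img diffusion model stands in for the
    # refiner; the paper may use a different model and conditioning scheme.
    pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    poses = sample_pseudo_poses(train_poses, n_views)
    for _ in range(n_rounds):                # iterative refinement rounds
        for pose in poses:
            rendered = scene.render(pose)    # artifact-prone sparse-view render
            # A low `strength` keeps the output close to the render, biasing
            # the diffusion model toward view-consistent completions rather
            # than free hallucination.
            refined = pipe(
                prompt=prompt, image=rendered,
                strength=0.3, guidance_scale=7.5,
            ).images[0]
            scene.optimize_step(refined, pose)  # pseudo-view supervision

Each refined pseudo view acts as an extra training image, so the Gaussian optimization sees denser supervision than the sparse inputs alone would provide.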
Cite

Text
Kong et al. "Generative Sparse-View Gaussian Splatting." Conference on Computer Vision and Pattern Recognition, 2025. doi:10.1109/CVPR52734.2025.02491

Markdown
[Kong et al. "Generative Sparse-View Gaussian Splatting." Conference on Computer Vision and Pattern Recognition, 2025.](https://mlanthology.org/cvpr/2025/kong2025cvpr-generative/) doi:10.1109/CVPR52734.2025.02491

BibTeX
@inproceedings{kong2025cvpr-generative,
title = {{Generative Sparse-View Gaussian Splatting}},
author = {Kong, Hanyang and Yang, Xingyi and Wang, Xinchao},
booktitle = {Conference on Computer Vision and Pattern Recognition},
year = {2025},
pages = {26745--26755},
doi = {10.1109/CVPR52734.2025.02491},
url = {https://mlanthology.org/cvpr/2025/kong2025cvpr-generative/}
}