Sparse2DGS: Geometry-Prioritized Gaussian Splatting for Surface Reconstruction from Sparse Views
Abstract
We present a Gaussian Splatting method for surface reconstruction from sparse input views. Previous methods rely on dense views and struggle under sparse-view settings, where the Structure-from-Motion points available for initialization are extremely sparse. While learning-based Multi-view Stereo (MVS) provides dense 3D points, directly combining it with Gaussian Splatting leads to suboptimal results, since sparse-view geometric optimization is inherently ill-posed. We propose Sparse2DGS, an MVS-initialized Gaussian Splatting pipeline for complete and accurate reconstruction. Our key insight is to incorporate geometry-prioritized enhancement schemes that enable direct and robust geometric learning under these ill-posed conditions. Sparse2DGS outperforms existing methods by notable margins, reducing the Chamfer Distance error to 1.13 from 2.81 for 2DGS on the DTU dataset with 3 input views, while being 2× faster than the NeRF-based fine-tuning approach. Code is available at https://github.com/Wuuu3511/Sparse2DGS.
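For readers unfamiliar with the evaluation metric cited above, the following is a minimal sketch of the symmetric Chamfer Distance between two point clouds. It is an illustration only: the official DTU evaluation additionally applies visibility masks and distance thresholds, which this sketch omits, and the function name and interface here are our own.

```python
import numpy as np
from scipy.spatial import cKDTree

def chamfer_distance(points_a: np.ndarray, points_b: np.ndarray) -> float:
    """Symmetric Chamfer Distance between point clouds of shape (N, 3) and (M, 3).

    Averages the mean nearest-neighbor distance in both directions.
    Note: the official DTU protocol also masks occluded regions and
    thresholds outlier distances; those steps are omitted here.
    """
    dist_a_to_b, _ = cKDTree(points_b).query(points_a)  # nearest point in B for each point in A
    dist_b_to_a, _ = cKDTree(points_a).query(points_b)  # nearest point in A for each point in B
    return 0.5 * (dist_a_to_b.mean() + dist_b_to_a.mean())

# Usage with random stand-in clouds (real evaluation uses reconstructed
# and ground-truth surfaces sampled as points):
pred = np.random.rand(1000, 3)
gt = np.random.rand(1200, 3)
print(chamfer_distance(pred, gt))
```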
Cite
Text
Wu et al. "Sparse2DGS: Geometry-Prioritized Gaussian Splatting for Surface Reconstruction from Sparse Views." Conference on Computer Vision and Pattern Recognition, 2025. doi:10.1109/CVPR52734.2025.01056
Markdown
[Wu et al. "Sparse2DGS: Geometry-Prioritized Gaussian Splatting for Surface Reconstruction from Sparse Views." Conference on Computer Vision and Pattern Recognition, 2025.](https://mlanthology.org/cvpr/2025/wu2025cvpr-sparse2dgs/) doi:10.1109/CVPR52734.2025.01056
BibTeX
@inproceedings{wu2025cvpr-sparse2dgs,
title = {{Sparse2DGS: Geometry-Prioritized Gaussian Splatting for Surface Reconstruction from Sparse Views}},
author = {Wu, Jiang and Li, Rui and Zhu, Yu and Guo, Rong and Sun, Jinqiu and Zhang, Yanning},
booktitle = {Conference on Computer Vision and Pattern Recognition},
year = {2025},
pages = {11307--11316},
doi = {10.1109/CVPR52734.2025.01056},
url = {https://mlanthology.org/cvpr/2025/wu2025cvpr-sparse2dgs/}
}