3D Gaussian Inpainting with Depth-Guided Cross-View Consistency

Abstract

When performing 3D inpainting with novel-view rendering methods such as Neural Radiance Fields (NeRF) or 3D Gaussian Splatting (3DGS), achieving texture and geometric consistency across camera views remains a challenge. In this paper, we propose 3D Gaussian Inpainting with Depth-Guided Cross-View Consistency (3DGIC), a framework for cross-view consistent 3D inpainting. Guided by the depth rendered from each training view, our 3DGIC exploits background pixels visible across different views to update the inpainting mask, allowing us to refine the 3DGS representation for inpainting purposes. Through extensive experiments on benchmark datasets, we confirm that our 3DGIC outperforms current state-of-the-art 3D inpainting methods both quantitatively and qualitatively.
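The core idea sketched in the abstract, using rendered depth to find background content that is visible from other training views and shrinking the inpainting mask accordingly, can be illustrated with the minimal NumPy sketch below. This is not the authors' released code: the function names, the pinhole-camera back-projection, and the relative depth-agreement tolerance are all illustrative assumptions about how such a depth-guided cross-view visibility test could be implemented.

import numpy as np

def backproject(depth, K, c2w):
    """Back-project every pixel of a rendered depth map to world coordinates.

    depth: (H, W) rendered z-depth, K: (3, 3) intrinsics,
    c2w: (4, 4) camera-to-world pose. Returns (H, W, 3) world points.
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).astype(np.float64)   # (H, W, 3)
    cam = (np.linalg.inv(K) @ pix.reshape(-1, 3).T).T * depth.reshape(-1, 1)
    cam_h = np.concatenate([cam, np.ones((cam.shape[0], 1))], axis=1)
    world = (c2w @ cam_h.T).T[:, :3]
    return world.reshape(H, W, 3)

def refine_mask(mask_ref, depth_ref, K_ref, c2w_ref, views, depth_tol=0.01):
    """Shrink a reference view's inpainting mask by marking pixels whose 3D
    location is observed as background in at least one other training view.

    mask_ref: (H, W) bool, True = region to be inpainted.
    views: list of dicts with keys 'mask', 'depth', 'K', 'w2c' (world-to-camera).
    Returns the refined boolean mask.
    """
    world = backproject(depth_ref, K_ref, c2w_ref)            # (H, W, 3)
    refined = mask_ref.copy()
    ys, xs = np.nonzero(mask_ref)
    pts = world[ys, xs]                                       # (N, 3) masked-pixel locations
    pts_h = np.concatenate([pts, np.ones((pts.shape[0], 1))], axis=1)

    for v in views:
        H, W = v['depth'].shape
        cam = (v['w2c'] @ pts_h.T).T[:, :3]                   # points in this view's camera frame
        z = cam[:, 2]
        proj = (v['K'] @ cam.T).T
        u = proj[:, 0] / proj[:, 2]
        w = proj[:, 1] / proj[:, 2]
        ui, vi = np.round(u).astype(int), np.round(w).astype(int)
        inside = (z > 0) & (ui >= 0) & (ui < W) & (vi >= 0) & (vi < H)
        # A masked pixel counts as "visible background" in this view if it
        # projects inside the image, lands on an unmasked pixel, and its depth
        # agrees with the rendered depth there (i.e. it is not occluded).
        vis = np.zeros_like(inside)
        idx = np.nonzero(inside)[0]
        d_obs = v['depth'][vi[idx], ui[idx]]
        agree = np.abs(d_obs - z[idx]) < depth_tol * np.maximum(d_obs, 1e-6)
        unmasked = ~v['mask'][vi[idx], ui[idx]]
        vis[idx] = agree & unmasked
        refined[ys[vis], xs[vis]] = False                     # background seen elsewhere: drop from mask
    return refined

Pixels removed from the mask this way can, in principle, be filled directly from the views in which they are observed, leaving generative inpainting to handle only the region that no training view sees.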

Cite

Text

Huang et al. "3D Gaussian Inpainting with Depth-Guided Cross-View Consistency." Conference on Computer Vision and Pattern Recognition, 2025. doi:10.1109/CVPR52734.2025.02487

Markdown

[Huang et al. "3D Gaussian Inpainting with Depth-Guided Cross-View Consistency." Conference on Computer Vision and Pattern Recognition, 2025.](https://mlanthology.org/cvpr/2025/huang2025cvpr-3d/) doi:10.1109/CVPR52734.2025.02487

BibTeX

@inproceedings{huang2025cvpr-3d,
  title     = {{3D Gaussian Inpainting with Depth-Guided Cross-View Consistency}},
  author    = {Huang, Sheng-Yu and Chou, Zi-Ting and Wang, Yu-Chiang Frank},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2025},
  pages     = {26704--26713},
  doi       = {10.1109/CVPR52734.2025.02487},
  url       = {https://mlanthology.org/cvpr/2025/huang2025cvpr-3d/}
}