3D Gaussian Flats: Hybrid 2D/3D Photometric Scene Reconstruction
Abstract
Recent advances in radiance fields and novel view synthesis enable the creation of realistic digital twins from photographs. However, current methods struggle with flat, texture-less surfaces, producing uneven and semi-transparent reconstructions due to an ill-conditioned photometric reconstruction objective. Surface reconstruction methods solve this issue but sacrifice visual quality. We propose a novel hybrid 2D/3D representation that jointly optimizes constrained planar (2D) Gaussians for modeling flat surfaces and freeform (3D) Gaussians for the rest of the scene. Our end-to-end approach dynamically detects and refines planar regions, improving both visual fidelity and geometric accuracy. It achieves state-of-the-art depth estimation on ScanNet++ and ScanNetv2, and excels at mesh extraction without overfitting to a specific camera model, showing its effectiveness in producing high-quality reconstructions of indoor scenes.
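To make the hybrid representation concrete, below is a minimal illustrative sketch, assuming planar Gaussians are constrained to a detected plane (a point plus an in-plane basis) while freeform Gaussians keep full 3D parameters; the names, fields, and structure are assumptions for illustration only, not the paper's implementation.

```python
# Illustrative sketch only (assumed parameterization, not the authors' code):
# planar Gaussians live in 2D coordinates of a detected plane and are lifted
# to 3D so both populations can be rendered jointly.
from dataclasses import dataclass
import numpy as np


@dataclass
class FreeformGaussians:
    means: np.ndarray      # (N, 3) world-space centers
    scales: np.ndarray     # (N, 3) per-axis extents
    rotations: np.ndarray  # (N, 4) unit quaternions
    opacities: np.ndarray  # (N,)


@dataclass
class PlanarGaussians:
    plane_point: np.ndarray   # (3,) a point on the detected plane
    plane_basis: np.ndarray   # (2, 3) orthonormal in-plane axes
    uv: np.ndarray            # (M, 2) centers in plane coordinates
    scales_2d: np.ndarray     # (M, 2) in-plane extents (zero thickness)
    opacities: np.ndarray     # (M,)

    def means_world(self) -> np.ndarray:
        """Lift constrained 2D centers back to 3D for joint rendering."""
        return self.plane_point + self.uv @ self.plane_basis


def all_means(freeform: FreeformGaussians, planar: PlanarGaussians) -> np.ndarray:
    """Concatenate both populations so a single rasterizer can consume them."""
    return np.concatenate([freeform.means, planar.means_world()], axis=0)
```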
Cite
Text
Taktasheva et al. "3D Gaussian Flats: Hybrid 2D/3D Photometric Scene Reconstruction." Advances in Neural Information Processing Systems, 2025.
Markdown
[Taktasheva et al. "3D Gaussian Flats: Hybrid 2D/3D Photometric Scene Reconstruction." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/taktasheva2025neurips-3d/)
BibTeX
@inproceedings{taktasheva2025neurips-3d,
title = {{3D Gaussian Flats: Hybrid 2D/3D Photometric Scene Reconstruction}},
author = {Taktasheva, Maria and Goli, Lily and Fiorini, Alessandro and Li, Zhen and Rebain, Daniel and Tagliasacchi, Andrea},
booktitle = {Advances in Neural Information Processing Systems},
year = {2025},
url = {https://mlanthology.org/neurips/2025/taktasheva2025neurips-3d/}
}