DeGauss: Dynamic-Static Decomposition with Gaussian Splatting for Distractor-Free 3D Reconstruction
Abstract
Reconstructing clean, distractor-free 3D scenes from real-world captures remains a significant challenge, particularly in highly dynamic and cluttered settings such as egocentric videos. To tackle this problem, we introduce DeGauss, a simple and robust self-supervised framework for dynamic scene reconstruction based on a decoupled dynamic-static Gaussian Splatting design. DeGauss models dynamic elements with foreground Gaussians and static content with background Gaussians, using a probabilistic mask to coordinate their composition and enable independent yet complementary optimization. DeGauss generalizes robustly across a wide range of real-world scenarios, from casual image collections to long, dynamic egocentric videos, without relying on complex heuristics or extensive supervision. Experiments on benchmarks including NeRF-on-the-go, ADT, AEA, Hot3D, and EPIC-Fields demonstrate that DeGauss consistently outperforms existing methods, establishing a strong baseline for generalizable, distractor-free 3D reconstruction in highly dynamic, interaction-rich environments.
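To make the decomposition concrete, below is a minimal sketch (not the authors' released code) of the per-pixel composition the abstract describes: images rendered separately from the dynamic (foreground) and static (background) Gaussian sets are blended by a probabilistic foreground mask. All names, shapes, and the assumption that the mask is rendered as a per-pixel probability in [0, 1] are illustrative.

import numpy as np

def composite(fg_rgb: np.ndarray, bg_rgb: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Blend renders of the two Gaussian sets with a probabilistic mask.

    fg_rgb, bg_rgb: (H, W, 3) images rendered from the dynamic
    (foreground) and static (background) Gaussians, respectively.
    mask: (H, W, 1) per-pixel foreground probability in [0, 1]
    (hypothetical output of the mask rendering pass).
    """
    # Convex combination: mask -> 1 selects the dynamic foreground,
    # mask -> 0 falls back to the static background.
    return mask * fg_rgb + (1.0 - mask) * bg_rgb

# Toy usage with random 4x4 "renders".
fg = np.random.rand(4, 4, 3)
bg = np.random.rand(4, 4, 3)
m = np.random.rand(4, 4, 1)
img = composite(fg, bg, m)
assert img.shape == (4, 4, 3)

Because the two Gaussian sets only interact through this blend, each can be optimized independently while still being supervised jointly through the composited image, which is what enables the decoupled yet complementary optimization the abstract refers to.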
Cite
Text
Wang et al. "DeGauss: Dynamic-Static Decomposition with Gaussian Splatting for Distractor-Free 3D Reconstruction." International Conference on Computer Vision, 2025.
Markdown
[Wang et al. "DeGauss: Dynamic-Static Decomposition with Gaussian Splatting for Distractor-Free 3D Reconstruction." International Conference on Computer Vision, 2025.](https://mlanthology.org/iccv/2025/wang2025iccv-degauss/)
BibTeX
@inproceedings{wang2025iccv-degauss,
  title = {{DeGauss: Dynamic-Static Decomposition with Gaussian Splatting for Distractor-Free 3D Reconstruction}},
  author = {Wang, Rui and Lohmeyer, Quentin and Meboldt, Mirko and Tang, Siyu},
  booktitle = {International Conference on Computer Vision},
  year = {2025},
  pages = {6294--6303},
  url = {https://mlanthology.org/iccv/2025/wang2025iccv-degauss/}
}