SGS-SLAM: Semantic Gaussian Splatting for Neural Dense SLAM
Abstract
We present SGS-SLAM, the first semantic visual SLAM system based on Gaussian Splatting. It incorporates appearance, geometry, and semantic features through multi-channel optimization, addressing the oversmoothing limitations of neural implicit SLAM systems in high-quality rendering, scene understanding, and object-level geometry. We introduce a unique semantic feature loss that effectively compensates for the shortcomings of traditional depth and color losses in object optimization. Through a semantic-guided keyframe selection strategy, we prevent erroneous reconstructions caused by cumulative errors. Extensive experiments demonstrate that SGS-SLAM delivers state-of-the-art performance in camera pose estimation, map reconstruction, precise semantic segmentation, and object-level geometric accuracy, while ensuring real-time rendering capabilities.
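The abstract describes a multi-channel optimization that jointly supervises appearance, geometry, and semantics, with a semantic feature loss added on top of the usual color and depth losses. The sketch below illustrates that idea in PyTorch-style pseudocode; the function name, loss forms (L1 for color/depth, cross-entropy for semantics), and weighting coefficients are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

# Hedged sketch of a multi-channel mapping loss in the spirit of SGS-SLAM.
# Weights and the cross-entropy semantic term are assumptions for illustration.
def multi_channel_loss(rendered_rgb, gt_rgb,
                       rendered_depth, gt_depth,
                       rendered_sem_logits, gt_sem_labels,
                       w_color=1.0, w_depth=1.0, w_sem=0.1):
    """Combine appearance, geometry, and semantic supervision into one objective."""
    color_loss = F.l1_loss(rendered_rgb, gt_rgb)                      # appearance channel
    depth_loss = F.l1_loss(rendered_depth, gt_depth)                  # geometry channel
    sem_loss = F.cross_entropy(rendered_sem_logits, gt_sem_labels)    # semantic channel
    return w_color * color_loss + w_depth * depth_loss + w_sem * sem_loss
```

In such a scheme the semantic term supplies object-level gradients where color and depth alone are ambiguous, which is the role the abstract attributes to the semantic feature loss.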
Cite
Text
Li et al. "SGS-SLAM: Semantic Gaussian Splatting for Neural Dense SLAM." Proceedings of the European Conference on Computer Vision (ECCV), 2024. doi:10.1007/978-3-031-72751-1_10
Markdown
[Li et al. "SGS-SLAM: Semantic Gaussian Splatting for Neural Dense SLAM." Proceedings of the European Conference on Computer Vision (ECCV), 2024.](https://mlanthology.org/eccv/2024/li2024eccv-sgsslam/) doi:10.1007/978-3-031-72751-1_10
BibTeX
@inproceedings{li2024eccv-sgsslam,
title = {{SGS-SLAM: Semantic Gaussian Splatting for Neural Dense SLAM}},
author = {Li, Mingrui and Liu, Shuhong and Zhou, Heng and Zhu, Guohao and Cheng, Na and Deng, Tianchen and Wang, Hongyu},
booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
year = {2024},
doi = {10.1007/978-3-031-72751-1_10},
url = {https://mlanthology.org/eccv/2024/li2024eccv-sgsslam/}
}