AutoOcc: Automatic Open-Ended Semantic Occupancy Annotation via Vision-Language Guided Gaussian Splatting
Abstract
Obtaining high-quality 3D semantic occupancy from raw sensor data remains an essential yet challenging task, often requiring extensive manual labeling. In this work, we propose AutoOcc, a vision-centric automated pipeline for open-ended semantic occupancy annotation that integrates differentiable Gaussian splatting guided by vision-language models. We formulate the open-ended semantic 3D occupancy reconstruction task to automatically generate scene occupancy by combining attention maps from vision-language models and foundation vision models. We devise semantic-aware Gaussians as intermediate geometric descriptors and propose a cumulative Gaussian-to-voxel splatting algorithm that enables effective and efficient occupancy annotation. Our framework outperforms existing automated occupancy annotation methods without human labels. AutoOcc also enables open-ended semantic occupancy auto-labeling, achieving robust performance in both static and dynamically complex scenarios.
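The abstract's "cumulative Gaussian-to-voxel splatting" can be pictured as accumulating each Gaussian's density, weighted by its semantic scores, into a voxel grid, then thresholding to separate occupied from free space. The sketch below is a minimal illustration of that idea, not the paper's implementation: the function name, the isotropic-covariance simplification, and the free-space threshold `tau` are all assumptions made for clarity.

```python
import numpy as np

def gaussians_to_voxels(means, sigmas, sem_logits, grid_min, voxel_size,
                        grid_shape, tau=0.05):
    """Accumulate isotropic 3D Gaussians into a semantic voxel grid.

    means:      (N, 3) Gaussian centers
    sigmas:     (N,)   isotropic standard deviations (simplification)
    sem_logits: (N, C) per-Gaussian semantic scores
    Returns per-voxel class labels with shape grid_shape; -1 marks free space.
    """
    N, C = sem_logits.shape
    # Voxel-center coordinates of the full grid.
    idx = np.stack(np.meshgrid(*[np.arange(s) for s in grid_shape],
                               indexing="ij"), axis=-1)
    centers = grid_min + (idx + 0.5) * voxel_size           # (X, Y, Z, 3)
    scores = np.zeros(grid_shape + (C,))
    for mu, sigma, logit in zip(means, sigmas, sem_logits):
        # Unnormalized Gaussian density evaluated at every voxel center.
        d2 = ((centers - mu) ** 2).sum(-1)
        w = np.exp(-0.5 * d2 / sigma ** 2)                  # (X, Y, Z)
        scores += w[..., None] * logit                      # cumulative splat
    # Voxels whose best semantic score stays below tau are treated as free.
    occ = scores.max(-1)
    return np.where(occ > tau, scores.argmax(-1), -1)
```

In the paper this accumulation is part of a differentiable pipeline driven by vision-language attention maps; the sketch only shows the forward voxelization step in isolation.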
Cite
Text
Zhou et al. "AutoOcc: Automatic Open-Ended Semantic Occupancy Annotation via Vision-Language Guided Gaussian Splatting." International Conference on Computer Vision, 2025.
Markdown
[Zhou et al. "AutoOcc: Automatic Open-Ended Semantic Occupancy Annotation via Vision-Language Guided Gaussian Splatting." International Conference on Computer Vision, 2025.](https://mlanthology.org/iccv/2025/zhou2025iccv-autoocc/)
BibTeX
@inproceedings{zhou2025iccv-autoocc,
title = {{AutoOcc: Automatic Open-Ended Semantic Occupancy Annotation via Vision-Language Guided Gaussian Splatting}},
author = {Zhou, Xiaoyu and Wang, Jingqi and Wang, Yongtao and Wei, Yufei and Dong, Nan and Yang, Ming-Hsuan},
booktitle = {International Conference on Computer Vision},
year = {2025},
pages = {3367--3377},
url = {https://mlanthology.org/iccv/2025/zhou2025iccv-autoocc/}
}