DrivingGaussian: Composite Gaussian Splatting for Surrounding Dynamic Autonomous Driving Scenes

Abstract

We present DrivingGaussian, an efficient and effective framework for surrounding dynamic autonomous driving scenes. For complex scenes with moving objects, we first sequentially and progressively model the static background of the entire scene with incremental static 3D Gaussians. We then leverage a composite dynamic Gaussian graph to handle multiple moving objects, individually reconstructing each object and restoring their accurate positions and occlusion relationships within the scene. We further use a LiDAR prior for Gaussian Splatting to reconstruct scenes with greater detail and maintain panoramic consistency. DrivingGaussian outperforms existing methods in dynamic driving scene reconstruction and enables photorealistic surround-view synthesis with high fidelity and multi-camera consistency. Our project page is at: https://github.com/VDIGPKU/DrivingGaussian.
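
The abstract describes a composite representation: a static background modeled with incremental 3D Gaussians, plus a dynamic Gaussian graph whose nodes hold per-object Gaussians that are placed back into the scene with their own poses. The following is a minimal, hypothetical sketch of that composition step, not the authors' implementation; all class, field, and function names (`GaussianSet`, `DynamicObjectNode`, `compose_scene`) are assumptions made for illustration.

```python
# Hypothetical sketch of the composite scene representation suggested by the
# abstract: static background Gaussians + a graph of per-object dynamic
# Gaussian nodes, merged into one set at a given timestamp before rasterization.
from dataclasses import dataclass, field
import numpy as np


@dataclass
class GaussianSet:
    """A set of 3D Gaussians: centers, scales, rotations, opacities, colors."""
    means: np.ndarray        # (N, 3) centers (world frame for the background)
    scales: np.ndarray       # (N, 3) per-axis scales
    rotations: np.ndarray    # (N, 4) unit quaternions
    opacities: np.ndarray    # (N, 1)
    features: np.ndarray     # (N, C) color / SH coefficients


@dataclass
class DynamicObjectNode:
    """One node of the dynamic Gaussian graph: an object reconstructed in its
    own canonical frame plus a per-timestamp pose (object -> world)."""
    object_id: str
    gaussians: GaussianSet
    poses: dict = field(default_factory=dict)   # timestamp -> (R: 3x3, t: 3)


def compose_scene(background: GaussianSet,
                  objects: list[DynamicObjectNode],
                  timestamp: float) -> GaussianSet:
    """Rigidly place each object's Gaussians into the world frame at
    `timestamp` and concatenate them with the static background, so occlusion
    is resolved by the depth-ordered Gaussian rasterizer downstream."""
    means = [background.means]
    scales = [background.scales]
    rotations = [background.rotations]
    opacities = [background.opacities]
    features = [background.features]

    for obj in objects:
        if timestamp not in obj.poses:
            continue  # object not present in this frame
        R, t = obj.poses[timestamp]
        means.append(obj.gaussians.means @ R.T + t)  # move centers rigidly
        # A full implementation would also rotate each Gaussian's orientation
        # by composing quaternions; omitted here for brevity.
        rotations.append(obj.gaussians.rotations)
        scales.append(obj.gaussians.scales)
        opacities.append(obj.gaussians.opacities)
        features.append(obj.gaussians.features)

    return GaussianSet(*(np.concatenate(parts, axis=0)
                         for parts in (means, scales, rotations,
                                       opacities, features)))
```

This sketch only illustrates the composition of static and dynamic Gaussians; the paper's LiDAR prior, incremental background construction, and optimization details are not modeled here.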

Cite

Text

Zhou et al. "DrivingGaussian: Composite Gaussian Splatting for Surrounding Dynamic Autonomous Driving Scenes." Conference on Computer Vision and Pattern Recognition, 2024. doi:10.1109/CVPR52733.2024.02044

Markdown

[Zhou et al. "DrivingGaussian: Composite Gaussian Splatting for Surrounding Dynamic Autonomous Driving Scenes." Conference on Computer Vision and Pattern Recognition, 2024.](https://mlanthology.org/cvpr/2024/zhou2024cvpr-drivinggaussian/) doi:10.1109/CVPR52733.2024.02044

BibTeX

@inproceedings{zhou2024cvpr-drivinggaussian,
  title     = {{DrivingGaussian: Composite Gaussian Splatting for Surrounding Dynamic Autonomous Driving Scenes}},
  author    = {Zhou, Xiaoyu and Lin, Zhiwei and Shan, Xiaojun and Wang, Yongtao and Sun, Deqing and Yang, Ming-Hsuan},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2024},
  pages     = {21634--21643},
  doi       = {10.1109/CVPR52733.2024.02044},
  url       = {https://mlanthology.org/cvpr/2024/zhou2024cvpr-drivinggaussian/}
}