ELMGS: Enhancing Memory and Computation Scalability Through Compression for 3D Gaussian Splatting

Abstract

3D models have recently been popularized by the potential of end-to-end training offered first by Neural Radiance Fields and, most recently, by 3D Gaussian Splatting models. The latter has the significant advantage of naturally providing fast training convergence and high editability. However, as research in this area is still in its infancy, there remains a gap in the literature regarding the scalability of such models. In this work, we propose an approach enabling both memory and computation scalability for these models. More specifically, we propose an iterative pruning strategy that removes redundant information encoded in the model. We also enhance the model's compressibility by including a differentiable quantization and entropy coding estimator in the optimization strategy. Our results on popular benchmarks showcase the effectiveness of the proposed approach and open the road to the broad deployability of such a solution, even on resource-constrained devices.
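To make the two core ideas concrete, the sketch below illustrates (a) opacity-based iterative pruning of Gaussians and (b) differentiable quantization via a straight-through estimator, the standard trick for making a rounding operation trainable. This is an illustrative sketch in PyTorch, not the authors' implementation; the function names, the 8-bit default, and the opacity-based pruning criterion are assumptions for the example.

```python
import torch

def ste_quantize(x, n_bits=8):
    """Uniform quantization with a straight-through estimator (STE).

    Forward pass rounds `x` to 2**n_bits levels over its observed range;
    backward pass lets gradients flow through unchanged, so the quantizer
    can sit inside an end-to-end optimization loop.
    """
    lo, hi = x.min().detach(), x.max().detach()
    scale = (hi - lo).clamp_min(1e-8) / (2 ** n_bits - 1)
    q = torch.round((x - lo) / scale) * scale + lo  # hard quantization
    return x + (q - x).detach()  # forward = q, backward = identity

def prune_by_opacity(opacity, keep_ratio=0.7):
    """Keep the `keep_ratio` fraction of Gaussians with the highest opacity
    (a common redundancy proxy), returning a boolean keep-mask. In an
    iterative scheme this would alternate with fine-tuning rounds."""
    k = max(1, int(keep_ratio * opacity.numel()))
    thresh = torch.topk(opacity.flatten(), k).values.min()
    return opacity >= thresh
```

In practice such a mask would be applied to all per-Gaussian attributes (position, covariance, color, opacity), and the quantized parameters would then be passed to an entropy coder; the STE is what lets a rate estimate on the quantized values contribute a gradient to the training loss.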

Cite

Text

Ali et al. "ELMGS: Enhancing Memory and Computation Scalability Through Compression for 3D Gaussian Splatting." Winter Conference on Applications of Computer Vision, 2025.

Markdown

[Ali et al. "ELMGS: Enhancing Memory and Computation Scalability Through Compression for 3D Gaussian Splatting." Winter Conference on Applications of Computer Vision, 2025.](https://mlanthology.org/wacv/2025/ali2025wacv-elmgs/)

BibTeX

@inproceedings{ali2025wacv-elmgs,
  title     = {{ELMGS: Enhancing Memory and Computation Scalability Through Compression for 3D Gaussian Splatting}},
  author    = {Ali, Muhammad Salman and Bae, Sung-Ho and Tartaglione, Enzo},
  booktitle = {Winter Conference on Applications of Computer Vision},
  year      = {2025},
  pages     = {2591--2600},
  url       = {https://mlanthology.org/wacv/2025/ali2025wacv-elmgs/}
}