Per-Gaussian Embedding-Based Deformation for Deformable 3D Gaussian Splatting
Abstract
As 3D Gaussian Splatting (3DGS) provides fast and high-quality novel view synthesis, it is a natural extension to deform a canonical 3DGS to multiple frames for representing a dynamic scene. However, previous works fail to accurately reconstruct complex dynamic scenes. We attribute this failure to the design of the deformation field, which is built as a coordinate-based function. This approach is problematic because 3DGS is a mixture of multiple fields centered at the Gaussians, not a single coordinate-based field. To resolve this problem, we define the deformation as a function of per-Gaussian embeddings and temporal embeddings. Moreover, we decompose the deformation into coarse and fine components to model slow and fast movements, respectively. Finally, we introduce a local smoothness regularization on the per-Gaussian embeddings to improve detail in dynamic regions. Project page: https://jeongminb.github.io/e-d3dgs/
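The core idea above can be sketched in a few lines: instead of querying a deformation network at each Gaussian's xyz coordinate, each Gaussian carries its own learnable embedding, which is decoded together with a temporal embedding into a position offset, with separate coarse and fine decoders. The sketch below is a simplified illustration, not the authors' implementation; all dimensions, network sizes, and names are hypothetical, and details such as frequency-separated temporal embeddings and the deformation of other Gaussian attributes are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not from the paper): 100 Gaussians,
# 32-dim per-Gaussian embeddings, 8-dim temporal embeddings.
N, D_G, D_T = 100, 32, 8

# Learnable per-Gaussian embeddings and a temporal embedding for frame t.
z_gaussian = rng.normal(size=(N, D_G))   # one embedding per Gaussian
z_time = rng.normal(size=(D_T,))         # embedding of the queried time

def make_mlp(in_dim, hidden, out_dim):
    """Randomly initialized two-layer MLP parameters (stand-in for training)."""
    return (rng.normal(size=(in_dim, hidden)) * 0.1, np.zeros(hidden),
            rng.normal(size=(hidden, out_dim)) * 0.1, np.zeros(out_dim))

def mlp(x, w1, b1, w2, b2):
    """Apply a tiny MLP: ReLU hidden layer, linear output."""
    h = np.maximum(x @ w1 + b1, 0.0)
    return h @ w2 + b2

# Separate decoders for coarse (slow) and fine (fast) deformation.
coarse_net = make_mlp(D_G + D_T, 64, 3)
fine_net = make_mlp(D_G + D_T, 64, 3)

# Each Gaussian's deformation is a function of ITS embedding plus time,
# rather than of its xyz coordinate; the two decoder outputs are summed.
inp = np.concatenate([z_gaussian, np.broadcast_to(z_time, (N, D_T))], axis=1)
delta_xyz = mlp(inp, *coarse_net) + mlp(inp, *fine_net)

print(delta_xyz.shape)  # one 3D position offset per Gaussian at time t
```

Because the input is an embedding rather than a coordinate, two Gaussians at nearby positions can deform differently, which is what lets this formulation capture fast, localized motion that a single coordinate-based field smooths over.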
Cite
Text
Bae et al. "Per-Gaussian Embedding-Based Deformation for Deformable 3D Gaussian Splatting." Proceedings of the European Conference on Computer Vision (ECCV), 2024. doi:10.1007/978-3-031-72633-0_18
Markdown
[Bae et al. "Per-Gaussian Embedding-Based Deformation for Deformable 3D Gaussian Splatting." Proceedings of the European Conference on Computer Vision (ECCV), 2024.](https://mlanthology.org/eccv/2024/bae2024eccv-pergaussian/) doi:10.1007/978-3-031-72633-0_18
BibTeX
@inproceedings{bae2024eccv-pergaussian,
title = {{Per-Gaussian Embedding-Based Deformation for Deformable 3D Gaussian Splatting}},
author = {Bae, Jeongmin and Kim, Seoha and Yun, Youngsik and Lee, Hahyun and Bang, Gun and Uh, Youngjung},
booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
year = {2024},
doi = {10.1007/978-3-031-72633-0_18},
url = {https://mlanthology.org/eccv/2024/bae2024eccv-pergaussian/}
}