MVSplat: Efficient 3D Gaussian Splatting from Sparse Multi-View Images
Abstract
We introduce MVSplat, an efficient model that, given sparse multi-view images as input, predicts clean feed-forward 3D Gaussians. To accurately localize the Gaussian centers, we build a cost volume representation via plane sweeping, where the cross-view feature similarities stored in the cost volume provide valuable geometry cues for depth estimation. We learn the other Gaussian primitive parameters jointly with the Gaussian centers while relying only on photometric supervision. We demonstrate the importance of the cost volume representation for learning feed-forward Gaussians via extensive experimental evaluations. On the large-scale RealEstate10K and ACID benchmarks, MVSplat achieves state-of-the-art performance with the fastest feed-forward inference speed (22 fps). More impressively, compared to the latest state-of-the-art method pixelSplat, MVSplat uses 10× fewer parameters and infers more than 2× faster while providing higher appearance and geometry quality as well as better cross-dataset generalization.
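The core geometric idea — a plane-sweep cost volume built from cross-view feature similarities — can be illustrated with a minimal NumPy sketch. This is not MVSplat's actual implementation: it assumes rectified views where each depth plane reduces to a horizontal shift (disparity), uses a simple `np.roll` warp, and reads depth out with a soft-argmax, all of which are illustrative simplifications.

```python
import numpy as np

def plane_sweep_cost_volume(feat_ref, feat_src, disparities):
    """Build a cost volume by sweeping fronto-parallel planes.

    feat_ref, feat_src: (C, H, W) feature maps from two rectified views.
    disparities: candidate horizontal shifts, one per depth plane
    (a stand-in for full homography warping in the general case).
    Returns a (D, H, W) volume of per-pixel cross-view feature
    similarities (channel-wise dot products).
    """
    volume = np.zeros((len(disparities),) + feat_ref.shape[1:], dtype=feat_ref.dtype)
    for i, d in enumerate(disparities):
        # Warp source features toward the reference view for this plane.
        warped = np.roll(feat_src, shift=d, axis=2)
        # Correlation (dot product over channels) as the matching score.
        volume[i] = (feat_ref * warped).sum(axis=0)
    return volume

def depth_from_volume(volume, disparities):
    """Soft-argmax over candidates, a common readout in learned MVS."""
    weights = np.exp(volume - volume.max(axis=0, keepdims=True))
    weights /= weights.sum(axis=0, keepdims=True)
    d = np.asarray(disparities, dtype=float).reshape(-1, 1, 1)
    return (weights * d).sum(axis=0)
```

The key point the abstract makes is that these similarity scores give the network explicit geometric evidence for where each Gaussian center should sit in depth, rather than forcing it to infer geometry from a single view.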
Cite
Text
Chen et al. "MVSplat: Efficient 3D Gaussian Splatting from Sparse Multi-View Images." Proceedings of the European Conference on Computer Vision (ECCV), 2024. doi:10.1007/978-3-031-72664-4_21

Markdown

[Chen et al. "MVSplat: Efficient 3D Gaussian Splatting from Sparse Multi-View Images." Proceedings of the European Conference on Computer Vision (ECCV), 2024.](https://mlanthology.org/eccv/2024/chen2024eccv-mvsplat/) doi:10.1007/978-3-031-72664-4_21

BibTeX
@inproceedings{chen2024eccv-mvsplat,
title = {{MVSplat: Efficient 3D Gaussian Splatting from Sparse Multi-View Images}},
author = {Chen, Yuedong and Xu, Haofei and Zheng, Chuanxia and Zhuang, Bohan and Pollefeys, Marc and Geiger, Andreas and Cham, Tat-Jen and Cai, Jianfei},
booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
year = {2024},
doi = {10.1007/978-3-031-72664-4_21},
url = {https://mlanthology.org/eccv/2024/chen2024eccv-mvsplat/}
}