SparseCraft: Few-Shot Neural Reconstruction Through Stereopsis Guided Geometric Linearization
Abstract
We present a novel approach for recovering 3D shape and view-dependent appearance from a few colored images, enabling efficient 3D reconstruction and novel view synthesis. Our method learns an implicit neural representation in the form of a Signed Distance Function (SDF) and a radiance field. The model is trained progressively through ray-marching-enabled volumetric rendering, and regularized with learning-free multi-view stereo (MVS) cues. Key to our contribution is a novel implicit neural shape function learning strategy that encourages our SDF field to be as linear as possible near the level set, hence robustifying the training against noise emanating from the supervision and regularization signals. Without using any pretrained priors, our method, called SparseCraft, achieves state-of-the-art performance in both novel view synthesis and reconstruction from sparse views on standard benchmarks, while requiring less than 10 minutes of training. Project page: sparsecraft.github.io
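The core regularization idea, encouraging the SDF to behave linearly near its zero level set, can be illustrated with a first-order Taylor step: if the field were locally linear, projecting a near-surface point along its gradient by one Newton step should land exactly on the level set. Below is a minimal PyTorch sketch of one plausible way such a linearity regularizer could look; `taylor_linearity_loss`, `sdf_net`, and the projection step are illustrative assumptions for exposition, not the paper's exact loss formulation.

```python
import torch

def taylor_linearity_loss(sdf_net, points, eps=1e-8):
    """Hedged sketch of a first-order (Taylor) linearity regularizer for an
    SDF near its zero level set, in the spirit of SparseCraft's geometric
    linearization. The names and the Newton-style projection step are
    illustrative assumptions, not the authors' exact formulation.

    Idea: for a point x with value s(x) and gradient n = grad s(x), the
    first-order step x' = x - s(x) * n / |n|^2 lands on the zero level set
    if s is locally linear; we penalize the residual s(x').
    """
    points = points.requires_grad_(True)
    sdf = sdf_net(points)                                 # (N, 1) signed distances
    grad = torch.autograd.grad(
        sdf.sum(), points, create_graph=True)[0]          # (N, 3) spatial gradients
    # Project each point toward the zero level set with one Taylor/Newton step.
    proj = points - sdf * grad / (grad.norm(dim=-1, keepdim=True) ** 2 + eps)
    # A locally linear SDF would give sdf_net(proj) == 0 exactly.
    return sdf_net(proj).abs().mean()

if __name__ == "__main__":
    # Toy usage with a small MLP; in practice the points could come from
    # learning-free MVS cues near the surface, as the abstract describes.
    net = torch.nn.Sequential(
        torch.nn.Linear(3, 64), torch.nn.Softplus(beta=100),
        torch.nn.Linear(64, 1))
    pts = torch.randn(1024, 3)
    loss = taylor_linearity_loss(net, pts)
    loss.backward()
```

In this reading, the penalty pushes the network toward fields whose first-order Taylor expansion is accurate near the surface, which is one way to make training robust to noisy supervision and regularization signals.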
Cite
Text
Younes et al. "SparseCraft: Few-Shot Neural Reconstruction Through Stereopsis Guided Geometric Linearization." Proceedings of the European Conference on Computer Vision (ECCV), 2024. doi:10.1007/978-3-031-72904-1_3

Markdown

[Younes et al. "SparseCraft: Few-Shot Neural Reconstruction Through Stereopsis Guided Geometric Linearization." Proceedings of the European Conference on Computer Vision (ECCV), 2024.](https://mlanthology.org/eccv/2024/younes2024eccv-sparsecraft/) doi:10.1007/978-3-031-72904-1_3

BibTeX
@inproceedings{younes2024eccv-sparsecraft,
title = {{SparseCraft: Few-Shot Neural Reconstruction Through Stereopsis Guided Geometric Linearization}},
author = {Younes, Mae and Ouasfi, Amine and Boukhayma, Adnane},
booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
year = {2024},
doi = {10.1007/978-3-031-72904-1_3},
url = {https://mlanthology.org/eccv/2024/younes2024eccv-sparsecraft/}
}