Learning to Predict Scene-Level Implicit 3D from Posed RGBD Data
Abstract
We introduce a method that can learn to predict scene-level implicit functions for 3D reconstruction from posed RGBD data. At test time, our system maps a previously unseen RGB image to a 3D reconstruction of a scene via implicit functions. While implicit functions for 3D reconstruction have often been tied to meshes, we show that we can train one using only a set of posed RGBD images. This setting may help 3D reconstruction unlock the sea of accelerometer+RGBD data that is coming with new phones. Our system, D2-DRDF, can match and sometimes outperform current methods that use mesh supervision and shows better robustness to sparse data.
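To illustrate the training setting the abstract describes, the sketch below shows one way an image-conditioned implicit distance network could be supervised along camera rays using only posed RGBD observations, without mesh supervision. This is a minimal assumed sketch, not the authors' D2-DRDF implementation: the names ImplicitNet and ray_supervision_loss, the per-ray L1 target, and all hyperparameters are hypothetical choices for illustration.

# Minimal sketch (assumption, not the paper's method): supervising an implicit
# distance field along camera rays from posed RGBD frames, with no mesh labels.
import torch
import torch.nn as nn

class ImplicitNet(nn.Module):
    """Maps a 3D point, conditioned on an image feature, to a scalar distance."""
    def __init__(self, feat_dim=32):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + feat_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, 1),
        )

    def forward(self, points, feat):
        # points: (N, 3) world-space samples; feat: (N, feat_dim) image features
        return self.mlp(torch.cat([points, feat], dim=-1)).squeeze(-1)

def ray_supervision_loss(net, origins, dirs, gt_depth, feat, n_samples=32, t_max=8.0):
    """Sample points along each posed RGBD ray and regress the distance (along
    the ray) to the observed depth; the camera pose supplies origins and dirs."""
    N = origins.shape[0]
    t = torch.linspace(0.0, t_max, n_samples)                          # (S,)
    pts = origins[:, None, :] + t[None, :, None] * dirs[:, None, :]    # (N, S, 3)
    target = gt_depth[:, None] - t[None, :]                            # (N, S)
    feat_rep = feat[:, None, :].expand(-1, n_samples, -1)              # (N, S, F)
    pred = net(pts.reshape(-1, 3), feat_rep.reshape(-1, feat.shape[-1]))
    return nn.functional.l1_loss(pred.reshape(N, n_samples), target)

In this setting, each RGBD frame contributes ray origins, directions, and observed depths in a shared world frame via its camera pose, so the supervision signal comes directly from sensor data rather than from a reconstructed mesh.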
Cite
Text
Kulkarni et al. "Learning to Predict Scene-Level Implicit 3D from Posed RGBD Data." Conference on Computer Vision and Pattern Recognition, 2023. doi:10.1109/CVPR52729.2023.01655
Markdown
[Kulkarni et al. "Learning to Predict Scene-Level Implicit 3D from Posed RGBD Data." Conference on Computer Vision and Pattern Recognition, 2023.](https://mlanthology.org/cvpr/2023/kulkarni2023cvpr-learning/) doi:10.1109/CVPR52729.2023.01655
BibTeX
@inproceedings{kulkarni2023cvpr-learning,
title = {{Learning to Predict Scene-Level Implicit 3D from Posed RGBD Data}},
author = {Kulkarni, Nilesh and Jin, Linyi and Johnson, Justin and Fouhey, David F.},
booktitle = {Conference on Computer Vision and Pattern Recognition},
year = {2023},
pages = {17256-17265},
doi = {10.1109/CVPR52729.2023.01655},
url = {https://mlanthology.org/cvpr/2023/kulkarni2023cvpr-learning/}
}