MorpheuS: Neural Dynamic 360° Surface Reconstruction from Monocular RGB-D Video

Abstract

Neural rendering has demonstrated remarkable success in dynamic scene reconstruction. Thanks to the expressiveness of neural representations, prior works can accurately capture the motion and achieve high-fidelity reconstruction of the target object. Despite this, real-world video scenarios often feature large unobserved regions where neural representations struggle to achieve realistic completion. To tackle this challenge, we introduce MorpheuS, a framework for dynamic 360° surface reconstruction from a casually captured RGB-D video. Our approach models the target scene as a canonical field that encodes its geometry and appearance, in conjunction with a deformation field that warps points from the current frame to the canonical space. We leverage a view-dependent diffusion prior and distill knowledge from it to achieve realistic completion of unobserved regions. Experimental results on various real-world and synthetic datasets show that our method can achieve high-fidelity 360° surface reconstruction of a deformable object from a monocular RGB-D video.
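The abstract's two-field design can be made concrete with a small sketch. Below is a minimal, hypothetical PyTorch example of a deformation field that warps per-frame points into a shared canonical space, and a canonical field that decodes signed-distance geometry and RGB appearance; all class names, network sizes, and input conventions here are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class MLP(nn.Module):
    # Small fully connected network shared by both fields (illustrative sizes).
    def __init__(self, in_dim, out_dim, hidden=128, depth=4):
        super().__init__()
        layers, d = [], in_dim
        for _ in range(depth - 1):
            layers += [nn.Linear(d, hidden), nn.ReLU()]
            d = hidden
        layers.append(nn.Linear(d, out_dim))
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)

class DeformationField(nn.Module):
    # Warps a point observed at time t into the shared canonical space.
    def __init__(self):
        super().__init__()
        self.mlp = MLP(in_dim=4, out_dim=3)  # (x, y, z, t) -> offset

    def forward(self, points, t):
        t = t.expand(points.shape[0], 1)  # broadcast timestamp to each point
        return points + self.mlp(torch.cat([points, t], dim=-1))

class CanonicalField(nn.Module):
    # Encodes geometry (SDF) and appearance (RGB) once, in canonical space;
    # every frame queries this same field after warping.
    def __init__(self):
        super().__init__()
        self.mlp = MLP(in_dim=3, out_dim=4)  # -> (sdf, r, g, b)

    def forward(self, points):
        out = self.mlp(points)
        return out[..., :1], torch.sigmoid(out[..., 1:])  # sdf, color

# Querying one frame: warp its sample points, then decode in canonical space.
deform, canonical = DeformationField(), CanonicalField()
pts = torch.rand(1024, 3)  # points sampled along camera rays (hypothetical)
sdf, rgb = canonical(deform(pts, torch.tensor([0.5])))  # normalized time t

In such a decomposition, per-frame motion lives entirely in the deformation field, so the canonical field can aggregate observations from all frames; the diffusion-prior distillation described in the abstract would then supervise the unobserved parts of that canonical representation.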

Cite

Text

Wang et al. "MorpheuS: Neural Dynamic 360° Surface Reconstruction from Monocular RGB-D Video." Conference on Computer Vision and Pattern Recognition, 2024.

Markdown

[Wang et al. "MorpheuS: Neural Dynamic 360° Surface Reconstruction from Monocular RGB-D Video." Conference on Computer Vision and Pattern Recognition, 2024.](https://mlanthology.org/cvpr/2024/wang2024cvpr-morpheus/)

BibTeX

@inproceedings{wang2024cvpr-morpheus,
  title     = {{MorpheuS: Neural Dynamic 360{\textdegree} Surface Reconstruction from Monocular RGB-D Video}},
  author    = {Wang, Hengyi and Wang, Jingwen and Agapito, Lourdes},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2024},
  pages     = {20965--20976},
  url       = {https://mlanthology.org/cvpr/2024/wang2024cvpr-morpheus/}
}