Neural Scene Flow Fields for Space-Time View Synthesis of Dynamic Scenes
Abstract
We present a method to perform novel view and time synthesis of dynamic scenes, requiring only a monocular video with known camera poses as input. To do this, we introduce Neural Scene Flow Fields, a new representation that models the dynamic scene as a time-variant continuous function of appearance, geometry, and 3D scene motion. Our representation is optimized through a neural network to fit the observed input views. We show that our representation can be used for complex dynamic scenes, including thin structures, view-dependent effects, and natural degrees of motion. We conduct a number of experiments that demonstrate that our approach significantly outperforms recent monocular view synthesis methods, and we show qualitative results of space-time view synthesis on a variety of real-world videos.
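The abstract describes the representation as a single time-variant continuous function of appearance, geometry, and 3D scene motion, optimized as a neural network. For concreteness, below is a minimal PyTorch sketch of what such a field could look like: an MLP that maps a positionally encoded space-time point and view direction to RGB color, volume density, and backward/forward 3D scene flow. The layer widths, encoding depth, and output heads here are illustrative assumptions, not the paper's released architecture.

```python
# A minimal sketch of a time-variant scene representation in the spirit of
# Neural Scene Flow Fields. Sizes and heads are illustrative assumptions.
import torch
import torch.nn as nn


def positional_encoding(x: torch.Tensor, num_freqs: int = 10) -> torch.Tensor:
    """Map each coordinate to [sin(2^k x), cos(2^k x)] features (NeRF-style)."""
    freqs = 2.0 ** torch.arange(num_freqs, device=x.device)
    angles = x[..., None] * freqs           # (..., dims, num_freqs)
    enc = torch.cat([angles.sin(), angles.cos()], dim=-1)
    return enc.flatten(start_dim=-2)        # (..., dims * 2 * num_freqs)


class SceneFlowField(nn.Module):
    """MLP mapping a space-time point (x, t) and view direction d to
    appearance (RGB), geometry (density), and scene flow to t-1 / t+1."""

    def __init__(self, num_freqs: int = 10, hidden: int = 256):
        super().__init__()
        in_dim = 4 * 2 * num_freqs          # encoded (x, y, z, t)
        dir_dim = 3 * 2 * num_freqs         # encoded view direction
        self.num_freqs = num_freqs
        self.trunk = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.density_head = nn.Linear(hidden, 1)   # sigma (geometry)
        self.flow_head = nn.Linear(hidden, 6)      # backward + forward flow
        self.rgb_head = nn.Sequential(             # view-dependent color
            nn.Linear(hidden + dir_dim, hidden // 2), nn.ReLU(),
            nn.Linear(hidden // 2, 3), nn.Sigmoid(),
        )

    def forward(self, xyz, t, view_dir):
        h = self.trunk(
            positional_encoding(torch.cat([xyz, t], dim=-1), self.num_freqs)
        )
        sigma = torch.relu(self.density_head(h))
        flow = self.flow_head(h)
        rgb = self.rgb_head(
            torch.cat([h, positional_encoding(view_dir, self.num_freqs)], dim=-1)
        )
        return rgb, sigma, flow[..., :3], flow[..., 3:]


# Usage: query a batch of space-time samples, e.g. points along camera rays.
model = SceneFlowField()
xyz = torch.rand(1024, 3)                   # 3D sample positions
t = torch.rand(1024, 1)                     # normalized frame times
view_dir = torch.randn(1024, 3)
view_dir = view_dir / view_dir.norm(dim=-1, keepdim=True)
rgb, sigma, flow_bwd, flow_fwd = model(xyz, t, view_dir)
```

In such a setup, the predicted color and density would be composited along rays with standard volume rendering and fit to the input video, while the flow outputs link points across neighboring frames; the loss terms that use them are beyond this sketch.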
Cite
Text
Li et al. "Neural Scene Flow Fields for Space-Time View Synthesis of Dynamic Scenes." Conference on Computer Vision and Pattern Recognition, 2021. doi:10.1109/CVPR46437.2021.00643Markdown
[Li et al. "Neural Scene Flow Fields for Space-Time View Synthesis of Dynamic Scenes." Conference on Computer Vision and Pattern Recognition, 2021.](https://mlanthology.org/cvpr/2021/li2021cvpr-neural/) doi:10.1109/CVPR46437.2021.00643BibTeX
@inproceedings{li2021cvpr-neural,
title = {{Neural Scene Flow Fields for Space-Time View Synthesis of Dynamic Scenes}},
author = {Li, Zhengqi and Niklaus, Simon and Snavely, Noah and Wang, Oliver},
booktitle = {Conference on Computer Vision and Pattern Recognition},
year = {2021},
pages = {6498--6508},
doi = {10.1109/CVPR46437.2021.00643},
url = {https://mlanthology.org/cvpr/2021/li2021cvpr-neural/}
}