Neural Radiance Flow for 4D View Synthesis and Video Processing
Abstract
We present a method, Neural Radiance Flow (NeRFlow), to learn a 4D spatial-temporal representation of a dynamic scene from a set of RGB images. Key to our approach is the use of a neural implicit representation that learns to capture the 3D occupancy, radiance, and dynamics of the scene. By enforcing consistency across different modalities, our representation enables multi-view rendering in diverse dynamic scenes, including water pouring, robotic interaction, and real images, outperforming state-of-the-art methods for spatial-temporal view synthesis. Our approach works even when provided with only a single monocular real video. We further demonstrate that the learned representation can serve as an implicit scene prior, enabling video processing tasks such as image super-resolution and denoising without any additional supervision.
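The core idea in the abstract is a single implicit function that maps a 4D spatio-temporal query (x, y, z, t) to occupancy/density, radiance, and scene dynamics. Below is a minimal PyTorch sketch of such a field; the network depth, positional encoding, and all names (`RadianceFlowField`, `positional_encoding`) are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn

def positional_encoding(x, num_freqs=6):
    """Map coordinates to sinusoids of increasing frequency (NeRF-style)."""
    freqs = 2.0 ** torch.arange(num_freqs, dtype=x.dtype, device=x.device)
    angles = x[..., None] * freqs           # (..., dim, num_freqs)
    enc = torch.cat([angles.sin(), angles.cos()], dim=-1)
    return enc.flatten(start_dim=-2)        # (..., dim * 2 * num_freqs)

class RadianceFlowField(nn.Module):
    """Hypothetical 4D implicit field: (x, y, z, t) -> (density, RGB, 3D flow)."""
    def __init__(self, num_freqs=6, hidden=256):
        super().__init__()
        in_dim = 4 * 2 * num_freqs          # encoded (x, y, z, t)
        self.trunk = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.density = nn.Linear(hidden, 1)   # occupancy / volume density
        self.radiance = nn.Linear(hidden, 3)  # RGB color
        self.flow = nn.Linear(hidden, 3)      # scene flow (motion) at (x, t)

    def forward(self, xyzt):
        h = self.trunk(positional_encoding(xyzt))
        return self.density(h), torch.sigmoid(self.radiance(h)), self.flow(h)

# Query a batch of 4D points.
field = RadianceFlowField()
points = torch.rand(1024, 4)               # (x, y, z, t) in [0, 1]
sigma, rgb, flow = field(points)
```

Rendering would then composite `sigma` and `rgb` along camera rays as in NeRF, while the `flow` head is what allows consistency losses to tie observations at different times together.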
Cite
Text
Du et al. "Neural Radiance Flow for 4D View Synthesis and Video Processing." International Conference on Computer Vision, 2021. doi:10.1109/ICCV48922.2021.01406
Markdown
[Du et al. "Neural Radiance Flow for 4D View Synthesis and Video Processing." International Conference on Computer Vision, 2021.](https://mlanthology.org/iccv/2021/du2021iccv-neural/) doi:10.1109/ICCV48922.2021.01406
BibTeX
@inproceedings{du2021iccv-neural,
title = {{Neural Radiance Flow for 4D View Synthesis and Video Processing}},
author = {Du, Yilun and Zhang, Yinan and Yu, Hong-Xing and Tenenbaum, Joshua B. and Wu, Jiajun},
booktitle = {International Conference on Computer Vision},
year = {2021},
pages = {14324--14334},
doi = {10.1109/ICCV48922.2021.01406},
url = {https://mlanthology.org/iccv/2021/du2021iccv-neural/}
}