Space-Time Neural Irradiance Fields for Free-Viewpoint Video

Abstract

We present a method that learns a spatiotemporal neural irradiance field for dynamic scenes from a single video. Our learned representation enables free-viewpoint rendering of the input video. Our method builds on recent advances in neural implicit representations. Learning a spatiotemporal irradiance field from a single video poses significant challenges because the video contains only one observation of the scene at any point in time. The 3D geometry of a scene can be legitimately represented in numerous ways, since varying geometry (motion) can be explained by varying appearance and vice versa. We resolve this ambiguity by constraining the time-varying geometry of our dynamic scene representation with scene depth estimated by video depth estimation methods, aggregating content from individual frames into a single global representation. We provide an extensive quantitative evaluation and demonstrate compelling free-viewpoint rendering results.
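
For intuition, the following is a minimal PyTorch-style sketch (not the authors' released code) of the core idea the abstract describes: an MLP maps a positionally encoded space-time point (x, y, z, t) to color and volume density (irradiance is view-independent, so no view direction is input), rays are volume-rendered at a fixed time, and the expected ray depth is constrained to match depth predicted by an off-the-shelf video depth estimator. All names, layer sizes, and hyperparameters below are illustrative assumptions.

import torch
import torch.nn as nn

def positional_encoding(p, num_freqs):
    # Encode each coordinate with sin/cos at exponentially spaced frequencies.
    freqs = 2.0 ** torch.arange(num_freqs, device=p.device)
    angles = p[..., None] * freqs                        # (..., 4, F)
    enc = torch.cat([angles.sin(), angles.cos()], dim=-1)
    return enc.flatten(start_dim=-2)                     # (..., 4 * 2F)

class SpaceTimeIrradianceField(nn.Module):
    # Maps an encoded space-time point (x, y, z, t) to RGB irradiance and density.
    def __init__(self, num_freqs=10, hidden=256):
        super().__init__()
        self.num_freqs = num_freqs
        self.mlp = nn.Sequential(
            nn.Linear(4 * 2 * num_freqs, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),                        # 3 color + 1 density
        )

    def forward(self, xyzt):
        out = self.mlp(positional_encoding(xyzt, self.num_freqs))
        return torch.sigmoid(out[..., :3]), torch.relu(out[..., 3])

def render_rays(field, origins, dirs, t, near=0.1, far=10.0, n_samples=64):
    # Standard volume rendering along each ray, evaluated at a fixed time t.
    z = torch.linspace(near, far, n_samples, device=origins.device)
    pts = origins[:, None, :] + dirs[:, None, :] * z[None, :, None]   # (R, S, 3)
    xyzt = torch.cat([pts, t.expand(*pts.shape[:-1], 1)], dim=-1)     # (R, S, 4)
    rgb, sigma = field(xyzt)
    delta = torch.cat([z[1:] - z[:-1], z.new_full((1,), 1e10)])
    alpha = 1.0 - torch.exp(-sigma * delta)                           # (R, S)
    trans = torch.cumprod(
        torch.cat([alpha.new_ones(alpha.shape[0], 1), 1.0 - alpha + 1e-10], dim=-1),
        dim=-1)[:, :-1]
    weights = alpha * trans
    color = (weights[..., None] * rgb).sum(dim=1)                     # (R, 3)
    depth = (weights * z).sum(dim=1)                                  # expected ray depth
    return color, depth

if __name__ == "__main__":
    field = SpaceTimeIrradianceField()
    origins = torch.zeros(8, 3)
    dirs = torch.randn(8, 3)
    dirs = dirs / dirs.norm(dim=-1, keepdim=True)
    color, depth = render_rays(field, origins, dirs, torch.tensor(0.5))
    # Photometric loss plus the depth constraint: the rendered depth is pulled
    # toward per-frame depth from a video depth estimator (random stand-ins here).
    gt_rgb, est_depth = torch.rand(8, 3), torch.rand(8) * 5.0
    loss = ((color - gt_rgb) ** 2).mean() + 0.1 * ((depth - est_depth) ** 2).mean()
    loss.backward()

In this sketch the depth term is what disambiguates geometry from appearance: without it, many density/color assignments reproduce the single observed view per time step equally well.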

Cite

Text

Xian et al. "Space-Time Neural Irradiance Fields for Free-Viewpoint Video." Conference on Computer Vision and Pattern Recognition, 2021. doi:10.1109/CVPR46437.2021.00930

Markdown

[Xian et al. "Space-Time Neural Irradiance Fields for Free-Viewpoint Video." Conference on Computer Vision and Pattern Recognition, 2021.](https://mlanthology.org/cvpr/2021/xian2021cvpr-spacetime/) doi:10.1109/CVPR46437.2021.00930

BibTeX

@inproceedings{xian2021cvpr-spacetime,
  title     = {{Space-Time Neural Irradiance Fields for Free-Viewpoint Video}},
  author    = {Xian, Wenqi and Huang, Jia-Bin and Kopf, Johannes and Kim, Changil},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2021},
  pages     = {9421--9431},
  doi       = {10.1109/CVPR46437.2021.00930},
  url       = {https://mlanthology.org/cvpr/2021/xian2021cvpr-spacetime/}
}