FWD: Real-Time Novel View Synthesis with Forward Warping and Depth
Abstract
Novel view synthesis (NVS) is a challenging task requiring systems to generate photorealistic images of scenes from new viewpoints, where both quality and speed are important for applications. Previous image-based rendering (IBR) methods are fast, but have poor quality when input views are sparse. Recent Neural Radiance Fields (NeRF) and generalizable variants give impressive results but are not real-time. In our paper, we propose a generalizable NVS method with sparse inputs, called FWD, which gives high-quality synthesis in real time. With explicit depth and differentiable rendering, it achieves results competitive with the SOTA methods with a 130-1000× speedup and better perceptual quality. If available, sensor depth can be seamlessly integrated during either training or inference to improve image quality while retaining real-time speed. With the growing prevalence of depth sensors, we hope that methods making use of depth will become increasingly useful.
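The core mechanism named in the abstract, forward warping input pixels into the target view using explicit per-pixel depth, can be sketched roughly as follows. This is a minimal NumPy illustration under assumed pinhole-camera conventions (intrinsics K, relative rotation R and translation t), using hard z-buffered splatting; it is not the paper's actual pipeline, which splats learned features with a differentiable point-cloud renderer and refines the result with a network.

import numpy as np

def forward_warp(src_img, depth, K, R, t):
    """Warp src_img (H, W, 3) to a target view by unprojecting each pixel
    with its depth (H, W) and reprojecting into the target camera."""
    H, W, _ = src_img.shape
    # Pixel grid in homogeneous coordinates (u, v, 1), flattened to 3 x HW.
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T

    # Unproject to 3D in the source camera frame: X = depth * K^-1 [u, v, 1]^T.
    rays = np.linalg.inv(K) @ pix
    pts_src = rays * depth.reshape(1, -1)

    # Rigid transform into the target camera frame, then project with K.
    pts_tgt = R @ pts_src + t.reshape(3, 1)
    proj = K @ pts_tgt
    z = proj[2]
    uv = proj[:2] / np.clip(z, 1e-6, None)  # perspective divide

    # Hard nearest-pixel splat with a z-buffer; a differentiable renderer
    # would instead splat each point softly over nearby pixels.
    out = np.zeros_like(src_img)
    zbuf = np.full((H, W), np.inf)
    ui = np.round(uv[0]).astype(int)
    vi = np.round(uv[1]).astype(int)
    valid = (z > 0) & (ui >= 0) & (ui < W) & (vi >= 0) & (vi < H)
    src_pix = src_img.reshape(-1, 3)
    for i in np.flatnonzero(valid):
        if z[i] < zbuf[vi[i], ui[i]]:
            zbuf[vi[i], ui[i]] = z[i]
            out[vi[i], ui[i]] = src_pix[i]
    return out

Because every pixel is moved by its own depth, the warp handles parallax between views; holes left by disocclusions are what the paper's refinement network (and fusion across multiple input views) is meant to fill.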
Cite
Text
Cao et al. "FWD: Real-Time Novel View Synthesis with Forward Warping and Depth." Conference on Computer Vision and Pattern Recognition, 2022. doi:10.1109/CVPR52688.2022.01526
Markdown
[Cao et al. "FWD: Real-Time Novel View Synthesis with Forward Warping and Depth." Conference on Computer Vision and Pattern Recognition, 2022.](https://mlanthology.org/cvpr/2022/cao2022cvpr-fwd/) doi:10.1109/CVPR52688.2022.01526
BibTeX
@inproceedings{cao2022cvpr-fwd,
title = {{FWD: Real-Time Novel View Synthesis with Forward Warping and Depth}},
author = {Cao, Ang and Rockwell, Chris and Johnson, Justin},
booktitle = {Conference on Computer Vision and Pattern Recognition},
year = {2022},
pages = {15713--15724},
doi = {10.1109/CVPR52688.2022.01526},
url = {https://mlanthology.org/cvpr/2022/cao2022cvpr-fwd/}
}