Real-Time Neural Rasterization for Large Scenes
Abstract
We propose a new method for realistic real-time novel-view synthesis (NVS) of large scenes. Existing fast neural rendering methods generate realistic results, but primarily work for small-scale scenes (<50 square meters) and have difficulty at large scale (>10,000 square meters). Traditional graphics-based rasterization is fast for large scenes but lacks realism and requires expensive, manually created assets. Our approach combines the best of both worlds: it takes a moderate-quality scaffold mesh as input and learns a neural texture field and shader to model view-dependent effects that enhance realism, while still using the standard graphics pipeline for real-time rendering. Our method outperforms existing neural rendering methods, providing at least 30x faster rendering with comparable or better realism for large self-driving and drone scenes. Our work is the first to enable real-time visualization of large real-world scenes.
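The abstract describes a deferred-shading style pipeline: a rasterizer draws the scaffold mesh with learned texture features, and a small neural shader converts those features plus the per-pixel view direction into RGB. Below is a minimal PyTorch sketch of that idea; the network sizes, feature dimension, and the assumption that the rasterizer has already produced a feature G-buffer are all illustrative, not taken from the paper.

import torch
import torch.nn as nn

class NeuralShader(nn.Module):
    """Small MLP mapping rasterized texture features + view direction to RGB.

    Hypothetical sketch: the paper's actual architecture and inputs are
    not specified in the abstract.
    """
    def __init__(self, feat_dim: int = 8, hidden: int = 32):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid(),  # RGB in [0, 1]
        )

    def forward(self, feats: torch.Tensor, view_dirs: torch.Tensor) -> torch.Tensor:
        # feats: (H, W, feat_dim) features sampled from the learned texture field
        # view_dirs: (H, W, 3) unit view directions, enabling view-dependent effects
        return self.mlp(torch.cat([feats, view_dirs], dim=-1))

# Toy usage: pretend the rasterizer already wrote features to a G-buffer.
H, W, F = 4, 4, 8
gbuffer = torch.rand(H, W, F)                                    # stand-in feature G-buffer
dirs = torch.nn.functional.normalize(torch.rand(H, W, 3), dim=-1)
rgb = NeuralShader(feat_dim=F)(gbuffer, dirs)                    # (H, W, 3) shaded image

Because the expensive geometry pass stays in the standard graphics pipeline and only this lightweight per-pixel MLP runs as a neural step, such a design can plausibly keep rendering real-time even for large scenes.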
Cite
Text
Liu et al. "Real-Time Neural Rasterization for Large Scenes." International Conference on Computer Vision, 2023. doi:10.1109/ICCV51070.2023.00773

BibTeX
@inproceedings{liu2023iccv-realtime,
title = {{Real-Time Neural Rasterization for Large Scenes}},
author = {Liu, Jeffrey Yunfan and Chen, Yun and Yang, Ze and Wang, Jingkang and Manivasagam, Sivabalan and Urtasun, Raquel},
booktitle = {International Conference on Computer Vision},
year = {2023},
pages = {8416-8427},
doi = {10.1109/ICCV51070.2023.00773},
url = {https://mlanthology.org/iccv/2023/liu2023iccv-realtime/}
}