NeuRAD: Neural Rendering for Autonomous Driving
Abstract
Neural radiance fields (NeRFs) have gained popularity in the autonomous driving (AD) community. Recent methods show NeRFs' potential for closed-loop simulation, enabling testing of AD systems, and as an advanced training data augmentation technique. However, existing methods often require long training times, dense semantic supervision, or lack generalizability. This, in turn, hinders the application of NeRFs for AD at scale. In this paper, we propose NeuRAD, a robust novel view synthesis method tailored to dynamic AD data. Our method features a simple network design, extensive sensor modeling for both camera and lidar -- including rolling shutter, beam divergence, and ray dropping -- and is applicable to multiple datasets out of the box. We verify its performance on five popular AD datasets, achieving state-of-the-art performance across the board. To encourage further development, we openly release the NeuRAD source code at https://github.com/georghess/NeuRAD.
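The abstract lists rolling-shutter modeling among the sensor effects. As a loose illustration of the general idea only (not the paper's implementation; see the released source at the link above for that), the sketch below computes per-pixel ray origins by assigning each image row a later capture time and interpolating the camera center over the exposure. All names here (rolling_shutter_ray_origins, pose_start, pose_end) are hypothetical.

import numpy as np

def rolling_shutter_ray_origins(height, width, pose_start, pose_end):
    """Per-pixel camera origins under a simple rolling-shutter model.

    pose_start, pose_end: 4x4 camera-to-world matrices at the start and end
    of the sensor readout. Rows are assumed to be read out top to bottom.
    """
    # Fraction of the readout at which each image row is captured (0 at top row).
    row_fraction = np.linspace(0.0, 1.0, height)[:, None]            # (H, 1)
    t = np.broadcast_to(row_fraction, (height, width))[..., None]     # (H, W, 1)

    # Linearly interpolate the camera center; a fuller treatment would also
    # interpolate rotation (e.g., slerp on quaternions).
    c_start = pose_start[:3, 3]
    c_end = pose_end[:3, 3]
    origins = (1.0 - t) * c_start + t * c_end                         # (H, W, 3)
    return origins

The same per-ray timestamp idea extends to lidar, where each return within a sweep can be assigned its own sensor pose.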
Cite
Text
Tonderski et al. "NeuRAD: Neural Rendering for Autonomous Driving." Conference on Computer Vision and Pattern Recognition, 2024. doi:10.1109/CVPR52733.2024.01411
Markdown
[Tonderski et al. "NeuRAD: Neural Rendering for Autonomous Driving." Conference on Computer Vision and Pattern Recognition, 2024.](https://mlanthology.org/cvpr/2024/tonderski2024cvpr-neurad/) doi:10.1109/CVPR52733.2024.01411
BibTeX
@inproceedings{tonderski2024cvpr-neurad,
title = {{NeuRAD: Neural Rendering for Autonomous Driving}},
author = {Tonderski, Adam and Lindström, Carl and Hess, Georg and Ljungbergh, William and Svensson, Lennart and Petersson, Christoffer},
booktitle = {Conference on Computer Vision and Pattern Recognition},
year = {2024},
pages = {14895--14904},
doi = {10.1109/CVPR52733.2024.01411},
url = {https://mlanthology.org/cvpr/2024/tonderski2024cvpr-neurad/}
}