READ: Large-Scale Neural Scene Rendering for Autonomous Driving
Abstract
With the development of advanced driver assistance systems (ADAS) and autonomous vehicles, the need to conduct experiments across diverse scenarios has become urgent. Although conventional image-to-image translation methods can synthesize photo-realistic street scenes, they cannot produce coherent scenes due to the lack of 3D information. In this paper, we propose a large-scale neural rendering method, READ, to synthesize autonomous driving scenes, making it possible to generate large-scale driving scenes in real time on a PC through a variety of sampling schemes. To represent driving scenarios effectively, we propose an ω-net rendering network that learns neural descriptors from sparse point clouds. Our model can not only synthesize photo-realistic driving scenes but also stitch and edit them. Promising experimental results show that our model performs well in large-scale driving scenarios.
Cite
Text
Li et al. "READ: Large-Scale Neural Scene Rendering for Autonomous Driving." AAAI Conference on Artificial Intelligence, 2023. doi:10.1609/AAAI.V37I2.25238
Markdown
[Li et al. "READ: Large-Scale Neural Scene Rendering for Autonomous Driving." AAAI Conference on Artificial Intelligence, 2023.](https://mlanthology.org/aaai/2023/li2023aaai-read/) doi:10.1609/AAAI.V37I2.25238
BibTeX
@inproceedings{li2023aaai-read,
title = {{READ: Large-Scale Neural Scene Rendering for Autonomous Driving}},
author = {Li, Zhuopeng and Li, Lu and Zhu, Jianke},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2023},
pages = {1522--1529},
doi = {10.1609/AAAI.V37I2.25238},
url = {https://mlanthology.org/aaai/2023/li2023aaai-read/}
}