Generalized Predictive Model for Autonomous Driving

Abstract

In this paper, we introduce the first large-scale video prediction model in the autonomous driving discipline. To eliminate the restriction of high-cost data collection and empower the generalization ability of our model, we acquire massive data from the web and pair it with diverse, high-quality text descriptions. The resultant dataset accumulates over 2,000 hours of driving videos spanning areas all over the world, with diverse weather conditions and traffic scenarios. Inheriting the merits of recent latent diffusion models, our model, dubbed GenAD, handles the challenging dynamics in driving scenes with novel temporal reasoning blocks. We showcase that it can generalize to various unseen driving datasets in a zero-shot manner, surpassing general and driving-specific video prediction counterparts. Furthermore, GenAD can be adapted into an action-conditioned prediction model or a motion planner, holding great potential for real-world driving applications.

Cite

Text

Yang et al. "Generalized Predictive Model for Autonomous Driving." Conference on Computer Vision and Pattern Recognition, 2024. doi:10.1109/CVPR52733.2024.01389

Markdown

[Yang et al. "Generalized Predictive Model for Autonomous Driving." Conference on Computer Vision and Pattern Recognition, 2024.](https://mlanthology.org/cvpr/2024/yang2024cvpr-generalized/) doi:10.1109/CVPR52733.2024.01389

BibTeX

@inproceedings{yang2024cvpr-generalized,
  title     = {{Generalized Predictive Model for Autonomous Driving}},
  author    = {Yang, Jiazhi and Gao, Shenyuan and Qiu, Yihang and Chen, Li and Li, Tianyu and Dai, Bo and Chitta, Kashyap and Wu, Penghao and Zeng, Jia and Luo, Ping and Zhang, Jun and Geiger, Andreas and Qiao, Yu and Li, Hongyang},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2024},
  pages     = {14662--14672},
  doi       = {10.1109/CVPR52733.2024.01389},
  url       = {https://mlanthology.org/cvpr/2024/yang2024cvpr-generalized/}
}