Learning to Drive via Asymmetric Self-Play

Abstract

Large-scale data is crucial for learning realistic and capable driving policies. However, it can be impractical to rely on scaling datasets with real data alone. The majority of driving data is uninteresting, and deliberately collecting new long-tail scenarios is expensive and unsafe. We propose asymmetric self-play to scale beyond real data with additional challenging, solvable, and realistic synthetic scenarios. Our approach pairs a teacher that learns to generate scenarios it can solve but the student cannot, with a student that learns to solve them. When applied to traffic simulation, we learn realistic policies with significantly fewer collisions in both nominal and long-tail scenarios. Our policies further zero-shot transfer to generate training data for end-to-end autonomy, significantly outperforming state-of-the-art adversarial approaches as well as policies trained on real data alone. For more information, visit waabi.ai/selfplay.
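The teacher–student loop described in the abstract can be illustrated with a deliberately simplified toy sketch. Everything here is an assumption for illustration: scenarios are reduced to a scalar "difficulty", agent ability to a scalar "skill", and the function and variable names are invented; the paper's actual method operates on traffic scenarios with learned policies. The only element taken from the abstract is the reward rule: the teacher benefits from scenarios it can solve but the student cannot, and the student trains on exactly those scenarios.

```python
import random


def asymmetric_self_play(rounds=200, seed=0):
    """Toy sketch of asymmetric self-play (illustrative, not the paper's method).

    A scenario is a scalar difficulty d. The teacher solves scenarios up to
    teacher_skill; the student solves scenarios up to its (initially lower)
    student_skill. The teacher's useful scenarios are those that are solvable
    (teacher succeeds) yet challenging (student fails); the student trains on
    those and its skill improves.
    """
    rng = random.Random(seed)
    teacher_skill = 1.0   # fixed, competent teacher (assumption)
    student_skill = 0.2   # weak student to start (assumption)
    for _ in range(rounds):
        # Teacher proposes a scenario it is able to solve.
        d = rng.uniform(0.0, teacher_skill)
        teacher_solves = d <= teacher_skill
        student_solves = d <= student_skill
        # Keep only scenarios that are solvable but not yet solved by the student.
        if teacher_solves and not student_solves:
            # Student "trains" on the scenario and improves slightly.
            student_skill += 0.01
    return student_skill
```

Under this toy dynamic, the student's skill climbs toward the teacher's, since the teacher keeps proposing scenarios just beyond the student's current ability; in the paper this curriculum effect is what lets training scale beyond the real data distribution.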

Cite

Text

Zhang et al. "Learning to Drive via Asymmetric Self-Play." Proceedings of the European Conference on Computer Vision (ECCV), 2024. doi:10.1007/978-3-031-73033-7_9

Markdown

[Zhang et al. "Learning to Drive via Asymmetric Self-Play." Proceedings of the European Conference on Computer Vision (ECCV), 2024.](https://mlanthology.org/eccv/2024/zhang2024eccv-learning-e/) doi:10.1007/978-3-031-73033-7_9

BibTeX

@inproceedings{zhang2024eccv-learning-e,
  title     = {{Learning to Drive via Asymmetric Self-Play}},
  author    = {Zhang, Chris and Biswas, Sourav and Wong, Kelvin and Fallah, Kion and Zhang, Lunjun and Chen, Dian and Casas, Sergio and Urtasun, Raquel},
  booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
  year      = {2024},
  doi       = {10.1007/978-3-031-73033-7_9},
  url       = {https://mlanthology.org/eccv/2024/zhang2024eccv-learning-e/}
}