Learning to Drive Anywhere
Abstract
Human drivers can seamlessly adapt their driving decisions across geographical locations with diverse conditions and rules of the road, e.g., left- vs. right-hand traffic. In contrast, existing models for autonomous driving have thus far only been deployed within restricted operational domains, i.e., without accounting for varying driving behaviors across locations or for model scalability. In this work, we propose GeCo, a single geographically-aware conditional imitation learning (CIL) model that can efficiently learn from heterogeneous and globally distributed data with dynamic environmental, traffic, and social characteristics. Our key insight is to introduce a high-capacity, geo-location-based channel attention mechanism that effectively adapts to local nuances while also flexibly modeling similarities among regions in a data-driven manner. By optimizing a contrastive imitation objective, our proposed approach can efficiently scale across the inherently imbalanced data distributions and location-dependent events. We demonstrate the benefits of our GeCo agent across multiple datasets, cities, and scalable deployment paradigms, i.e., centralized, semi-supervised, and distributed agent training. Specifically, GeCo outperforms CIL baselines by over $14\%$ in open-loop evaluation and $30\%$ in closed-loop testing on CARLA.
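The geo-location-based channel attention described in the abstract can be sketched roughly as follows: a learned embedding of the current region predicts per-channel gates that rescale the policy encoder's features. All shapes, names, and the small gating MLP here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def geo_channel_attention(features, region_embedding, w1, w2):
    """Rescale each feature channel by a gate predicted from a region embedding.

    features:         (C, H, W) feature map from the driving policy's encoder
    region_embedding: (D,) learned embedding of the geographic location
    w1: (D, K), w2: (K, C) weights of a small gating MLP (hypothetical)
    """
    hidden = np.maximum(region_embedding @ w1, 0.0)   # ReLU hidden layer
    gates = 1.0 / (1.0 + np.exp(-(hidden @ w2)))      # sigmoid gates in (0, 1)
    return features * gates[:, None, None]            # per-channel rescaling

# Toy usage with random weights (stand-ins for learned parameters).
rng = np.random.default_rng(0)
feats = rng.standard_normal((8, 4, 4))
emb = rng.standard_normal(16)
w1 = 0.1 * rng.standard_normal((16, 32))
w2 = 0.1 * rng.standard_normal((32, 8))
out = geo_channel_attention(feats, emb, w1, w2)
```

Because the gates lie in (0, 1), a region embedding can only attenuate channels, which lets the model softly suppress features that are irrelevant in a given locale while sharing the rest of the network across regions.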
Cite
Text
Zhu et al. "Learning to Drive Anywhere." Conference on Robot Learning, 2023.

Markdown

[Zhu et al. "Learning to Drive Anywhere." Conference on Robot Learning, 2023.](https://mlanthology.org/corl/2023/zhu2023corl-learning-a/)

BibTeX
@inproceedings{zhu2023corl-learning-a,
title = {{Learning to Drive Anywhere}},
author = {Zhu, Ruizhao and Huang, Peng and Ohn-Bar, Eshed and Saligrama, Venkatesh},
booktitle = {Conference on Robot Learning},
year = {2023},
pages = {3631--3653},
volume = {229},
url = {https://mlanthology.org/corl/2023/zhu2023corl-learning-a/}
}