LEADER: Learning Attention over Driving Behaviors for Planning Under Uncertainty
Abstract
Uncertainty in human behaviors poses a significant challenge to autonomous driving in crowded urban environments. The partially observable Markov decision process (POMDP) offers a principled general framework for decision making under uncertainty and achieves real-time performance for complex tasks by leveraging Monte Carlo sampling. However, sampling may miss rare, but critical events, leading to potential safety concerns. To tackle this challenge, we propose a new algorithm, LEarning Attention over Driving bEhavioRs (LEADER), which learns to attend to critical human behaviors during planning. LEADER learns a neural network generator to provide attention over human behaviors; it integrates the attention into a belief-space planner through importance sampling, which biases planning towards critical events. To train the attention generator, we form a minimax game between the generator and the planner. By solving this minimax game, LEADER learns to perform risk-aware planning without explicit human effort on data labeling.
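The abstract's core mechanism, biasing Monte Carlo sampling toward critical behaviors via importance sampling, can be illustrated with a toy sketch. This is not the paper's implementation; the behavior set, nominal model `p`, attention weights, and `estimate_value` helper are all hypothetical, showing only how attention-reweighted sampling keeps the value estimate unbiased while oversampling rare, risky events.

```python
import random

# Hypothetical sketch (not the paper's code): sample human behaviors from an
# attention-weighted proposal q instead of the nominal model p, then correct
# each sampled return by the importance weight p/q so the estimate stays
# unbiased under p while rare, critical behaviors are sampled more often.

behaviors = ["yield", "cruise", "cut_in"]            # illustrative behavior set
p = {"yield": 0.6, "cruise": 0.35, "cut_in": 0.05}   # nominal behavior model
attention = {"yield": 0.2, "cruise": 0.2, "cut_in": 0.6}  # learned attention (toy values)

# Proposal distribution: attention re-weights the nominal model, then normalize.
unnorm = {b: p[b] * attention[b] for b in behaviors}
z = sum(unnorm.values())
q = {b: w / z for b, w in unnorm.items()}

def estimate_value(simulate, n=1000, seed=0):
    """Monte Carlo value estimate under p while sampling behaviors from q."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        b = rng.choices(behaviors, weights=[q[x] for x in behaviors])[0]
        total += (p[b] / q[b]) * simulate(b)  # importance weight corrects the bias
    return total / n
```

For example, with a toy `simulate` that returns a large penalty for `cut_in`, the rare risky behavior is drawn far more often under `q` than under `p`, yet the reweighted average still converges to the expectation under `p`.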
Cite
Text
Danesh et al. "LEADER: Learning Attention over Driving Behaviors for Planning Under Uncertainty." Conference on Robot Learning, 2022.
Markdown
[Danesh et al. "LEADER: Learning Attention over Driving Behaviors for Planning Under Uncertainty." Conference on Robot Learning, 2022.](https://mlanthology.org/corl/2022/danesh2022corl-leader/)
BibTeX
@inproceedings{danesh2022corl-leader,
title = {{LEADER: Learning Attention over Driving Behaviors for Planning Under Uncertainty}},
author = {Danesh, Mohamad Hosein and Cai, Panpan and Hsu, David},
booktitle = {Conference on Robot Learning},
year = {2022},
pages = {199-211},
volume = {205},
url = {https://mlanthology.org/corl/2022/danesh2022corl-leader/}
}