Robust Reinforcement Learning for Autonomous Driving

Abstract

Autonomous driving is still considered an "unsolved problem" given its inherently high variability and the fact that many processes associated with its development, such as vehicle control and scene recognition, remain open issues. Although reinforcement learning algorithms have achieved notable results in games and some robotic manipulation tasks, the technique has not been widely scaled up to more challenging real-world applications like autonomous driving. In this work, we propose a deep reinforcement learning (RL) algorithm embedding an actor-critic architecture with multi-step returns to achieve greater robustness of the agent's learned strategies when acting in complex and unstable environments. The experiments are conducted with the CARLA simulator, which offers customizable and realistic urban driving conditions. The developed deep actor RL guided by a policy-evaluator critic distinctly surpasses the performance of a standard deep RL agent.
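The abstract's central ingredient is the use of multi-step (n-step) returns as the critic's learning target in an actor-critic setup. A minimal sketch of how such targets are typically computed is below; the function name, the bootstrap convention (using the critic's value estimate at the n-step horizon), and the hyperparameter values are illustrative assumptions, not details from the paper:

```python
import numpy as np

def n_step_returns(rewards, values, gamma=0.99, n=5):
    """Compute n-step returns G_t = sum_{k=0}^{n-1} gamma^k * r_{t+k}
    + gamma^n * V(s_{t+n}), truncating at the end of the trajectory.

    rewards: per-step rewards r_0 .. r_{T-1}
    values:  critic value estimates V(s_0) .. V(s_T)
             (one longer than rewards, to bootstrap past the horizon)
    """
    T = len(rewards)
    returns = np.zeros(T)
    for t in range(T):
        horizon = min(t + n, T)
        g = 0.0
        for k in range(t, horizon):
            g += (gamma ** (k - t)) * rewards[k]
        # Bootstrap the tail of the return with the critic's estimate.
        g += (gamma ** (horizon - t)) * values[horizon]
        returns[t] = g
    return returns
```

These returns then serve as targets for the critic and, via the advantage `G_t - V(s_t)`, as the weighting of the actor's policy-gradient update. Larger n reduces the bias introduced by the critic's bootstrap at the cost of higher variance.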

Cite

Text

Jaafra et al. "Robust Reinforcement Learning for Autonomous Driving." ICLR 2019 Workshops: drlStructPred, 2019.

Markdown

[Jaafra et al. "Robust Reinforcement Learning for Autonomous Driving." ICLR 2019 Workshops: drlStructPred, 2019.](https://mlanthology.org/iclrw/2019/jaafra2019iclrw-robust/)

BibTeX

@inproceedings{jaafra2019iclrw-robust,
  title     = {{Robust Reinforcement Learning for Autonomous Driving}},
  author    = {Jaafra, Yesmina and Laurent, Jean Luc and Deruyver, Aline and Naceur, Mohamed Saber},
  booktitle = {ICLR 2019 Workshops: drlStructPred},
  year      = {2019},
  url       = {https://mlanthology.org/iclrw/2019/jaafra2019iclrw-robust/}
}