Hierarchical Reinforcement Learning and Model Predictive Control for Strategic Motion Planning in Autonomous Racing
Abstract
We present an approach for safe trajectory planning, where a strategic task related to autonomous racing is learned sample-efficiently within a simulation environment. A high-level policy, represented as a neural network, outputs a reward specification that is used within the objective of a parametric nonlinear model predictive controller. We can guarantee safe and feasible trajectories by including constraints and the vehicle kinematics in the nonlinear program. Compared to classical reinforcement learning, our approach restricts exploration to safe trajectories, starts with good prior performance, and yields complete trajectories that can be passed to a low-level tracking controller. We validate the performance of our algorithm in simulation and show how it learns to overtake and block other vehicles efficiently.
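The hierarchy described in the abstract can be pictured with a minimal sketch, not the authors' implementation: a small neural policy maps an observation to parameters (here, a hypothetical target lateral position and a tracking weight) that enter the objective of a parametric nonlinear program solved with CasADi/IPOPT, while a simple point-mass model and box constraints stand in for the vehicle kinematics and track bounds. All dimensions, the model, and the parameterization are illustrative assumptions.

```python
import numpy as np
import casadi as ca

obs_dim, param_dim, horizon, dt = 4, 2, 20, 0.1  # illustrative sizes, not from the paper

# High-level policy: a tiny, randomly initialized MLP stands in for the learned RL policy.
W1, b1 = np.random.randn(16, obs_dim) * 0.1, np.zeros(16)
W2, b2 = np.random.randn(param_dim, 16) * 0.1, np.zeros(param_dim)

def policy(obs):
    h = np.tanh(W1 @ obs + b1)
    return W2 @ h + b2  # reward specification handed to the MPC objective

# Low-level parametric NMPC on a point-mass lateral model (placeholder for vehicle kinematics).
y = ca.SX.sym('y', horizon + 1)   # lateral position along the track
v = ca.SX.sym('v', horizon + 1)   # lateral velocity
u = ca.SX.sym('u', horizon)       # lateral acceleration input
p = ca.SX.sym('p', param_dim)     # [target lateral position, tracking weight] from the policy

cost = 0
g = [y[0], v[0]]                  # pin the (placeholder) initial state to zero
for k in range(horizon):
    cost += p[1] * (y[k] - p[0]) ** 2 + 1e-2 * u[k] ** 2
    g += [y[k + 1] - (y[k] + dt * v[k]),   # point-mass dynamics as equality constraints
          v[k + 1] - (v[k] + dt * u[k])]

w = ca.vertcat(y, v, u)
solver = ca.nlpsol('solver', 'ipopt',
                   {'x': w, 'p': p, 'f': cost, 'g': ca.vertcat(*g)},
                   {'ipopt': {'print_level': 0}, 'print_time': 0})

# Track bounds on y and input limits on u keep exploration restricted to feasible trajectories.
lbx = np.concatenate([-2.0 * np.ones(horizon + 1), -np.inf * np.ones(horizon + 1), -3.0 * np.ones(horizon)])
ubx = np.concatenate([ 2.0 * np.ones(horizon + 1),  np.inf * np.ones(horizon + 1),  3.0 * np.ones(horizon)])

params = policy(np.zeros(obs_dim))                # the high-level policy sets the MPC cost parameters
sol = solver(x0=np.zeros(w.shape[0]), p=params, lbx=lbx, ubx=ubx, lbg=0, ubg=0)
trajectory = np.array(sol['x']).flatten()[:horizon + 1]  # lateral positions for a tracking controller
```

Because the policy only shapes the objective while the constraints stay in the nonlinear program, every solution the sketch produces respects the (toy) track bounds and dynamics, mirroring the safety argument made in the abstract.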
Cite
Text
Reiter et al. "Hierarchical Reinforcement Learning and Model Predictive Control for Strategic Motion Planning in Autonomous Racing." ICML 2024 Workshops: RLControlTheory, 2024.

Markdown
[Reiter et al. "Hierarchical Reinforcement Learning and Model Predictive Control for Strategic Motion Planning in Autonomous Racing." ICML 2024 Workshops: RLControlTheory, 2024.](https://mlanthology.org/icmlw/2024/reiter2024icmlw-hierarchical/)

BibTeX
@inproceedings{reiter2024icmlw-hierarchical,
  title = {{Hierarchical Reinforcement Learning and Model Predictive Control for Strategic Motion Planning in Autonomous Racing}},
  author = {Reiter, Rudolf and Hoffmann, Jasper and Boedecker, Joschka and Diehl, Moritz},
  booktitle = {ICML 2024 Workshops: RLControlTheory},
  year = {2024},
  url = {https://mlanthology.org/icmlw/2024/reiter2024icmlw-hierarchical/}
}