Model-Based Reinforcement Learning for Parameterized Action Spaces

Abstract

We propose Dynamics Learning and predictive control with Parameterized Actions (DLPA), a novel model-based reinforcement learning algorithm for Parameterized Action Markov Decision Processes (PAMDPs). The agent learns a parameterized-action-conditioned dynamics model and plans with a modified Model Predictive Path Integral (MPPI) control method. Through the lens of Lipschitz continuity, we theoretically quantify the gap between the values achieved by the planner's generated trajectory and by the optimal trajectory. Our empirical results on several standard benchmarks show that our algorithm achieves better sample efficiency and asymptotic performance than state-of-the-art PAMDP methods.
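To make the abstract's pipeline concrete, here is a minimal sketch of MPPI-style planning over a parameterized action space, where each action is a discrete type plus a continuous parameter vector. This is an illustration only, not the paper's implementation: the names `mppi_plan`, `dynamics`, and `reward`, and their batched interfaces, are assumptions introduced for this example.

```python
import numpy as np

def mppi_plan(state, dynamics, reward, n_types, param_dim,
              horizon=10, n_samples=256, temperature=1.0, rng=None):
    """Hypothetical MPPI-style planner for a parameterized action space.

    `dynamics(s, k, x)` and `reward(s, k, x)` stand in for the learned
    model, operating on batches of states `s`, discrete types `k`, and
    continuous parameters `x`; their interfaces are assumptions.
    """
    rng = np.random.default_rng() if rng is None else rng

    # Sample candidate action sequences: one discrete type and one
    # continuous parameter vector per planning step.
    types = rng.integers(n_types, size=(n_samples, horizon))
    params = rng.normal(size=(n_samples, horizon, param_dim))

    # Roll out every candidate sequence through the learned model,
    # accumulating predicted rewards.
    returns = np.zeros(n_samples)
    s = np.repeat(state[None, :], n_samples, axis=0)
    for t in range(horizon):
        returns += reward(s, types[:, t], params[:, t])
        s = dynamics(s, types[:, t], params[:, t])

    # MPPI update: exponentially weight samples by return (shifted for
    # numerical stability), then aggregate the first-step actions.
    w = np.exp((returns - returns.max()) / temperature)
    w /= w.sum()

    # Discrete part by weighted vote; continuous part by weighted mean.
    best_type = np.bincount(types[:, 0], weights=w, minlength=n_types).argmax()
    best_param = (w[:, None] * params[:, 0]).sum(axis=0)
    return best_type, best_param
```

The weighted vote over the first-step discrete type, alongside the usual weighted average over its continuous parameter, is one plausible way MPPI can be adapted when part of the action is discrete.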

Cite

Text

Zhang et al. "Model-Based Reinforcement Learning for Parameterized Action Spaces." International Conference on Machine Learning, 2024.

Markdown

[Zhang et al. "Model-Based Reinforcement Learning for Parameterized Action Spaces." International Conference on Machine Learning, 2024.](https://mlanthology.org/icml/2024/zhang2024icml-modelbased/)

BibTeX

@inproceedings{zhang2024icml-modelbased,
  title     = {{Model-Based Reinforcement Learning for Parameterized Action Spaces}},
  author    = {Zhang, Renhao and Fu, Haotian and Miao, Yilin and Konidaris, George},
  booktitle = {International Conference on Machine Learning},
  year      = {2024},
  pages     = {58935--58954},
  volume    = {235},
  url       = {https://mlanthology.org/icml/2024/zhang2024icml-modelbased/}
}