Hybrid Actor-Critic Reinforcement Learning in Parameterized Action Space
Abstract
In this paper we propose a hybrid actor-critic architecture for reinforcement learning in parameterized action space. It consists of multiple parallel sub-actor networks that decompose the structured action space into simpler action spaces, together with a critic network that guides the training of all sub-actor networks. While this paper focuses mainly on parameterized action space, the proposed architecture, which we call hybrid actor-critic, can be extended to more general action spaces that have a hierarchical structure. We present an instance of the hybrid actor-critic architecture based on proximal policy optimization (PPO), which we refer to as hybrid proximal policy optimization (H-PPO). Our experiments evaluate H-PPO on a collection of tasks with parameterized action spaces, where it demonstrates superior performance over previous methods for reinforcement learning with parameterized actions.
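The abstract describes one discrete sub-actor, one continuous sub-actor, and a shared critic. Below is a minimal sketch of such a network in a PyTorch style; the layer sizes, the Gaussian parameterization of the continuous head, and the (state_dim, n_discrete, param_dim) interface are illustrative assumptions, not the authors' released code.

```python
# Minimal sketch of a hybrid actor-critic network for a parameterized
# action space. Hyperparameters and head designs are assumptions made
# for illustration only.
import torch
import torch.nn as nn

class HybridActorCritic(nn.Module):
    def __init__(self, state_dim, n_discrete, param_dim, hidden=64):
        super().__init__()
        # Shared state encoder feeding all heads.
        self.encoder = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
        )
        # Sub-actor 1: categorical policy over the discrete actions.
        self.discrete_head = nn.Linear(hidden, n_discrete)
        # Sub-actor 2: Gaussian policy over the continuous parameters
        # attached to the selected discrete action.
        self.param_mu = nn.Linear(hidden, param_dim)
        self.param_log_std = nn.Parameter(torch.zeros(param_dim))
        # Single critic estimating V(s) to guide both sub-actors.
        self.value_head = nn.Linear(hidden, 1)

    def forward(self, state):
        h = self.encoder(state)
        discrete_logits = self.discrete_head(h)
        mu = self.param_mu(h)
        std = self.param_log_std.exp().expand_as(mu)
        value = self.value_head(h)
        return discrete_logits, (mu, std), value
```

Under this sketch, training would apply a PPO-style clipped surrogate objective to each policy head, with advantage estimates computed from the shared critic, in line with the H-PPO scheme outlined in the abstract.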
Cite
Text
Fan et al. "Hybrid Actor-Critic Reinforcement Learning in Parameterized Action Space." International Joint Conference on Artificial Intelligence, 2019. doi:10.24963/IJCAI.2019/316
Markdown
[Fan et al. "Hybrid Actor-Critic Reinforcement Learning in Parameterized Action Space." International Joint Conference on Artificial Intelligence, 2019.](https://mlanthology.org/ijcai/2019/fan2019ijcai-hybrid/) doi:10.24963/IJCAI.2019/316
BibTeX
@inproceedings{fan2019ijcai-hybrid,
title = {{Hybrid Actor-Critic Reinforcement Learning in Parameterized Action Space}},
author = {Fan, Zhou and Su, Rui and Zhang, Weinan and Yu, Yong},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {2019},
pages = {2279--2285},
doi = {10.24963/IJCAI.2019/316},
url = {https://mlanthology.org/ijcai/2019/fan2019ijcai-hybrid/}
}