Sample-Based Distributional Policy Gradient
Abstract
Distributional reinforcement learning (DRL) is a recent reinforcement learning framework whose success has been supported by various empirical studies. It relies on the idea of replacing the expected return with the return distribution, which captures the intrinsic randomness of the long-term rewards. Most of the existing literature on DRL focuses on problems with a discrete action space and on value-based methods. In this work, motivated by applications in control engineering and robotics where the action space is continuous, we propose the sample-based distributional policy gradient (SDPG) algorithm. It models the return distribution using samples via a reparameterization technique widely used in generative modeling. We compare SDPG with the state-of-the-art policy gradient method in DRL, Distributed Distributional Deterministic Policy Gradients (D4PG). We apply SDPG and D4PG to multiple OpenAI Gym environments and observe that our algorithm achieves better sample efficiency as well as higher rewards on most tasks.
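The core idea in the abstract, representing the return distribution by samples produced through a reparameterization network, as in generative modeling, can be illustrated with a short sketch. This is not the authors' implementation; the class and function names (SampleGenerator, return_samples), network sizes, and noise dimension below are hypothetical, and only the forward pass (drawing return samples for a state-action pair) is shown.

```python
# Illustrative sketch only: a generator maps (state, action, noise) to samples
# of the return distribution via the reparameterization trick. All names and
# hyperparameters here are assumptions, not the paper's actual architecture.
import torch
import torch.nn as nn


class SampleGenerator(nn.Module):
    """Maps (state, action, noise) to one sample of the return distribution."""

    def __init__(self, state_dim, action_dim, noise_dim=1, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim + noise_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),  # one scalar return sample per noise draw
        )

    def forward(self, state, action, noise):
        return self.net(torch.cat([state, action, noise], dim=-1))


def return_samples(generator, state, action, num_samples):
    """Draw num_samples return samples per (state, action) pair:
    sample Gaussian noise, push it through the generator."""
    batch = state.shape[0]
    noise = torch.randn(batch, num_samples, 1)
    s = state.unsqueeze(1).expand(-1, num_samples, -1)
    a = action.unsqueeze(1).expand(-1, num_samples, -1)
    return generator(s, a, noise).squeeze(-1)  # shape: (batch, num_samples)


if __name__ == "__main__":
    gen = SampleGenerator(state_dim=3, action_dim=1)
    s = torch.randn(8, 3)
    a = torch.randn(8, 1)
    z = return_samples(gen, s, a, num_samples=51)
    print(z.shape)  # torch.Size([8, 51]) -- empirical return distribution
```

In an actor-critic setup of this kind, such samples would stand in for the scalar Q-value: the empirical return distribution is compared against distributional Bellman target samples to train the critic, and its mean can be used for the policy gradient.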
Cite
Text
Singh et al. "Sample-Based Distributional Policy Gradient." Proceedings of The 4th Annual Learning for Dynamics and Control Conference, 2022.
Markdown
[Singh et al. "Sample-Based Distributional Policy Gradient." Proceedings of The 4th Annual Learning for Dynamics and Control Conference, 2022.](https://mlanthology.org/l4dc/2022/singh2022l4dc-samplebased/)
BibTeX
@inproceedings{singh2022l4dc-samplebased,
title = {{Sample-Based Distributional Policy Gradient}},
author = {Singh, Rahul and Lee, Keuntaek and Chen, Yongxin},
booktitle = {Proceedings of The 4th Annual Learning for Dynamics and Control Conference},
year = {2022},
pages = {676-688},
volume = {168},
url = {https://mlanthology.org/l4dc/2022/singh2022l4dc-samplebased/}
}