Clipped Action Policy Gradient
Abstract
Many continuous control tasks have bounded action spaces. When policy gradient methods are applied to such tasks, out-of-bound actions need to be clipped before execution, while policies are usually optimized as if the actions are not clipped. We propose a policy gradient estimator that exploits the knowledge of actions being clipped to reduce the variance in estimation. We prove that our estimator, named clipped action policy gradient (CAPG), is unbiased and achieves lower variance than the conventional estimator that ignores action bounds. Experimental results demonstrate that CAPG generally outperforms the conventional estimator, indicating that it is a better policy gradient estimator for continuous control tasks. The source code is available at https://github.com/pfnet-research/capg.
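For intuition, the sketch below illustrates the core idea for a one-dimensional Gaussian policy with bounded actions: when the sampled action is clipped to a bound, the estimator can use the log of the tail probability mass of the policy distribution instead of the log-density at the pre-clip action, since every out-of-bound sample maps to that bound. This is a minimal sketch written for this page; the function name and interface are hypothetical and not taken from the paper's released code.

```python
import math


def log_prob_clipped_gaussian(a_clipped, mean, std, low, high):
    """Log-probability of the executed (clipped) action under N(mean, std^2)
    with actions clipped to [low, high]. Hypothetical helper, for illustration only."""
    z = (a_clipped - mean) / std
    if a_clipped <= low:
        # All pre-clip samples below `low` are executed as `low`: lower tail mass.
        return math.log(0.5 * (1.0 + math.erf(z / math.sqrt(2.0))))
    if a_clipped >= high:
        # All pre-clip samples above `high` are executed as `high`: upper tail mass.
        return math.log(0.5 * (1.0 - math.erf(z / math.sqrt(2.0))))
    # Interior actions keep the usual Gaussian log-density.
    return -0.5 * z * z - math.log(std) - 0.5 * math.log(2.0 * math.pi)


if __name__ == "__main__":
    # Example: a sample at 1.7 is clipped to the upper bound 1.0 before execution.
    print(log_prob_clipped_gaussian(1.0, mean=0.8, std=0.5, low=-1.0, high=1.0))
```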
Cite
Text
Fujita and Maeda. "Clipped Action Policy Gradient." International Conference on Machine Learning, 2018.
Markdown
[Fujita and Maeda. "Clipped Action Policy Gradient." International Conference on Machine Learning, 2018.](https://mlanthology.org/icml/2018/fujita2018icml-clipped/)
BibTeX
@inproceedings{fujita2018icml-clipped,
title = {{Clipped Action Policy Gradient}},
author = {Fujita, Yasuhiro and Maeda, Shin-ichi},
booktitle = {International Conference on Machine Learning},
year = {2018},
pages = {1597--1606},
volume = {80},
url = {https://mlanthology.org/icml/2018/fujita2018icml-clipped/}
}