Learning to Constrain Policy Optimization with Virtual Trust Region
Abstract
We introduce a constrained optimization method for policy gradient reinforcement learning that uses two trust regions to regulate each policy update. In addition to using proximity to a single old policy as the first trust region, as done in prior work, we propose forming a second trust region by constructing a virtual policy that represents a wide range of past policies. We then constrain the new policy to stay close to this virtual policy, which is beneficial when the old policy performs poorly. We propose a mechanism to automatically build the virtual policy from a memory buffer of past policies, providing a new capability for dynamically selecting appropriate trust regions during the optimization process. Our proposed method, dubbed Memory-Constrained Policy Optimization (MCPO), is examined in diverse environments, including robotic locomotion control, navigation with sparse rewards, and Atari games, consistently demonstrating competitive performance against recent on-policy constrained policy gradient methods.
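To make the dual-trust-region idea concrete, below is a minimal PyTorch sketch of a penalized surrogate objective with two KL terms: one toward the most recent policy and one toward a virtual policy. It assumes the virtual policy is realized as a uniform mixture of action distributions from a buffer of past policy networks; the penalty (rather than hard-constraint) formulation and names such as `beta_old`, `beta_virtual`, and `mcpo_style_loss` are illustrative assumptions, not the paper's exact algorithm.

```python
# Sketch of a dual-trust-region policy loss. The virtual policy is a uniform
# mixture over action distributions from a buffer of past policy networks
# (one possible way to summarize many past policies); coefficients and the
# penalty formulation are illustrative, not the paper's exact objective.
import torch
import torch.nn as nn
import torch.distributions as D

def categorical_policy(obs_dim=8, n_actions=4):
    return nn.Sequential(nn.Linear(obs_dim, 64), nn.Tanh(), nn.Linear(64, n_actions))

def action_dist(net, obs):
    return D.Categorical(logits=net(obs))

def virtual_dist(policy_buffer, obs):
    # Average the action probabilities of past policies in the memory buffer.
    with torch.no_grad():
        probs = torch.stack([action_dist(p, obs).probs for p in policy_buffer]).mean(0)
    return D.Categorical(probs=probs)

def mcpo_style_loss(new_pi, old_pi, policy_buffer, obs, actions, advantages,
                    beta_old=1.0, beta_virtual=1.0):
    pi_new = action_dist(new_pi, obs)
    with torch.no_grad():
        pi_old = action_dist(old_pi, obs)
    pi_virtual = virtual_dist(policy_buffer, obs)

    # Importance-weighted policy-gradient surrogate w.r.t. the old policy.
    ratio = torch.exp(pi_new.log_prob(actions) - pi_old.log_prob(actions))
    surrogate = (ratio * advantages).mean()

    # Two KL penalties: stay near the old policy and near the virtual policy.
    kl_old = D.kl_divergence(pi_old, pi_new).mean()
    kl_virtual = D.kl_divergence(pi_virtual, pi_new).mean()
    return -(surrogate - beta_old * kl_old - beta_virtual * kl_virtual)

# Toy usage with random data standing in for a rollout batch.
obs = torch.randn(32, 8)
actions = torch.randint(0, 4, (32,))
advantages = torch.randn(32)
new_pi, old_pi = categorical_policy(), categorical_policy()
buffer = [categorical_policy() for _ in range(3)]  # stand-in for past checkpoints
loss = mcpo_style_loss(new_pi, old_pi, buffer, obs, actions, advantages)
loss.backward()
```

In this sketch, increasing `beta_virtual` pulls the update toward the aggregate of past policies, which is the kind of fallback the abstract describes for the case where the single old policy performs poorly; how the two trust regions are weighted or selected in practice is determined by the paper's mechanism, not shown here.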
Cite
Text
Le et al. "Learning to Constrain Policy Optimization with Virtual Trust Region." Neural Information Processing Systems, 2022.
Markdown
[Le et al. "Learning to Constrain Policy Optimization with Virtual Trust Region." Neural Information Processing Systems, 2022.](https://mlanthology.org/neurips/2022/le2022neurips-learning/)
BibTeX
@inproceedings{le2022neurips-learning,
  title     = {{Learning to Constrain Policy Optimization with Virtual Trust Region}},
  author    = {Le, Thai Hung and George, Thommen Karimpanal and Abdolshah, Majid and Nguyen, Dung and Do, Kien and Gupta, Sunil and Venkatesh, Svetha},
  booktitle = {Neural Information Processing Systems},
  year      = {2022},
  url       = {https://mlanthology.org/neurips/2022/le2022neurips-learning/}
}