Cautious Policy Programming: Exploiting KL Regularization for Monotonic Policy Improvement in Reinforcement Learning
Abstract
In this paper, we propose cautious policy programming (CPP), a novel value-based reinforcement learning (RL) algorithm that exploits the idea of monotonic policy improvement during learning. Based on the nature of entropy-regularized RL, we derive a new entropy-regularization-aware lower bound on policy improvement that depends only on the expected policy advantage function, rather than on state-action-space-wise maximization as in prior work. CPP leverages this lower bound as a criterion for adjusting the degree of each policy update, thereby alleviating policy oscillation. Unlike similar algorithms, which are mostly theory-oriented, we also propose a novel interpolation scheme that allows CPP to scale better to high-dimensional control problems. We demonstrate that the proposed algorithm can trade off performance and stability in both didactic classic control problems and challenging high-dimensional Atari games.
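To make the idea concrete, below is a minimal, illustrative Python sketch of a cautious (interpolated) policy update in the tabular setting: the new policy mixes the greedy policy with the current one, and the mixing coefficient is chosen to maximize a simple CPI-style quadratic lower bound on improvement driven by the expected policy advantage. The function names, the surrogate coefficients `a` and `b`, and the penalty term are assumptions for illustration; they are not the entropy-regularization-aware bound derived in the paper.

```python
import numpy as np

def expected_policy_advantage(pi_new, pi_old, q_values, state_dist):
    """Average advantage of pi_new over pi_old under pi_old's state distribution.

    pi_new, pi_old: policies, shape (n_states, n_actions)
    q_values: Q^{pi_old}, shape (n_states, n_actions)
    state_dist: state visitation distribution of pi_old, shape (n_states,)
    """
    v_old = np.sum(pi_old * q_values, axis=1)           # V^{pi_old}(s)
    adv = np.sum(pi_new * q_values, axis=1) - v_old     # per-state advantage of pi_new
    return np.dot(state_dist, adv)

def cautious_update(pi_old, pi_greedy, q_values, state_dist, gamma=0.99):
    """One cautious policy update via interpolation.

    The coefficient zeta maximizes a generic quadratic lower bound
    L(zeta) = a * zeta - b * zeta**2 on policy improvement (a CPI-style
    stand-in for the paper's bound), clipped to [0, 1].
    """
    a = expected_policy_advantage(pi_greedy, pi_old, q_values, state_dist) / (1.0 - gamma)
    # Hypothetical worst-case penalty: scales with the largest per-state
    # advantage gap and the effective horizon.
    eps = np.max(np.abs(np.sum((pi_greedy - pi_old) * q_values, axis=1)))
    b = 2.0 * gamma * eps / (1.0 - gamma) ** 2
    zeta = 1.0 if b <= 0 else float(np.clip(a / (2.0 * b), 0.0, 1.0))
    # zeta = 1 recovers the full greedy update; zeta = 0 keeps the old policy.
    return zeta * pi_greedy + (1.0 - zeta) * pi_old, zeta
```

When the expected advantage is small relative to the worst-case penalty, zeta shrinks toward zero and the update becomes conservative, which is the mechanism the abstract describes for trading off performance and stability.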
Cite
Text
Zhu and Matsubara. "Cautious Policy Programming: Exploiting KL Regularization for Monotonic Policy Improvement in Reinforcement Learning." Machine Learning, 2023. doi:10.1007/S10994-023-06368-Z
Markdown
[Zhu and Matsubara. "Cautious Policy Programming: Exploiting KL Regularization for Monotonic Policy Improvement in Reinforcement Learning." Machine Learning, 2023.](https://mlanthology.org/mlj/2023/zhu2023mlj-cautious/) doi:10.1007/S10994-023-06368-Z
BibTeX
@article{zhu2023mlj-cautious,
title = {{Cautious Policy Programming: Exploiting KL Regularization for Monotonic Policy Improvement in Reinforcement Learning}},
author = {Zhu, Lingwei and Matsubara, Takamitsu},
journal = {Machine Learning},
year = {2023},
pages = {4527--4562},
doi = {10.1007/S10994-023-06368-Z},
volume = {112},
url = {https://mlanthology.org/mlj/2023/zhu2023mlj-cautious/}
}