Quasi-Newton Trust Region Policy Optimization
Abstract
We propose a trust region method for policy optimization that employs a Quasi-Newton approximation of the Hessian, called Quasi-Newton Trust Region Policy Optimization (QNTRPO). Gradient descent is the de facto algorithm for reinforcement learning tasks with continuous control, and it has achieved state-of-the-art performance across a wide range of such tasks. However, it suffers from several drawbacks, including the lack of a stepsize selection criterion and slow convergence. We investigate a trust region method for policy optimization that uses a dogleg step together with a Quasi-Newton approximation of the Hessian. We demonstrate through numerical experiments on a wide range of challenging continuous control tasks that this choice is sample-efficient and improves performance.
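To make the two ingredients named in the abstract concrete, the sketch below (our illustration, not code from the paper) shows a generic dogleg trust-region step combined with a BFGS-style Quasi-Newton update of the Hessian approximation. The function names, tolerances, and the use of plain NumPy on a generic quadratic model are assumptions for illustration; the paper applies these ideas to the policy-optimization objective rather than to an arbitrary function.

```python
# Minimal sketch, assuming a generic smooth objective: dogleg trust-region
# step with a BFGS Hessian approximation. Not the authors' implementation.
import numpy as np

def bfgs_update(B, s, y):
    """Standard BFGS update of the Hessian approximation B, given the step
    s = x_new - x_old and gradient change y = g_new - g_old."""
    Bs = B @ s
    sBs = s @ Bs
    ys = y @ s
    if ys <= 1e-10 or sBs <= 1e-10:  # skip the update if curvature is not positive
        return B
    return B - np.outer(Bs, Bs) / sBs + np.outer(y, y) / ys

def dogleg_step(g, B, delta):
    """Dogleg step for min_p g^T p + 0.5 p^T B p subject to ||p|| <= delta."""
    p_newton = -np.linalg.solve(B, g)            # full quasi-Newton step
    if np.linalg.norm(p_newton) <= delta:
        return p_newton                          # Newton step fits in the trust region
    p_cauchy = -(g @ g) / (g @ B @ g) * g        # unconstrained steepest-descent minimizer
    if np.linalg.norm(p_cauchy) >= delta:
        return -delta * g / np.linalg.norm(g)    # scaled gradient step to the boundary
    # Otherwise move from the Cauchy point toward the Newton step until ||p|| = delta.
    d = p_newton - p_cauchy
    a, b = d @ d, 2 * (p_cauchy @ d)
    c = p_cauchy @ p_cauchy - delta ** 2
    tau = (-b + np.sqrt(b * b - 4 * a * c)) / (2 * a)
    return p_cauchy + tau * d
```

In a trust-region loop, the radius delta would be grown or shrunk based on how well the quadratic model predicts the actual objective change, which is what removes the need for a hand-tuned stepsize.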
Cite

Text:
Jha et al. "Quasi-Newton Trust Region Policy Optimization." Conference on Robot Learning, 2019.

Markdown:
[Jha et al. "Quasi-Newton Trust Region Policy Optimization." Conference on Robot Learning, 2019.](https://mlanthology.org/corl/2019/jha2019corl-quasinewton/)

BibTeX:
@inproceedings{jha2019corl-quasinewton,
title = {{Quasi-Newton Trust Region Policy Optimization}},
author = {Jha, Devesh K. and Raghunathan, Arvind U. and Romeres, Diego},
booktitle = {Conference on Robot Learning},
year = {2019},
pages = {945-954},
volume = {100},
url = {https://mlanthology.org/corl/2019/jha2019corl-quasinewton/}
}