Trust Region Policy Optimization
Abstract
In this article, we describe a method for optimizing control policies, with guaranteed monotonic improvement. By making several approximations to the theoretically-justified scheme, we develop a practical algorithm, called Trust Region Policy Optimization (TRPO). This algorithm is effective for optimizing large nonlinear policies such as neural networks. Our experiments demonstrate its robust performance on a wide variety of tasks: learning simulated robotic swimming, hopping, and walking gaits; and playing Atari games using images of the screen as input. Despite its approximations that deviate from the theory, TRPO tends to give monotonic improvement, with little tuning of hyperparameters.
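As a minimal sketch of the trust-region step the algorithm solves at each iteration (the notation here, including the step size δ, follows the standard statement of TRPO rather than the abstract above): maximize an importance-sampled surrogate advantage subject to an average KL-divergence constraint on the policy update,

\begin{aligned}
\max_{\theta} \;\; & \mathbb{E}_{s,a \sim \pi_{\theta_{\mathrm{old}}}}\!\left[ \frac{\pi_{\theta}(a \mid s)}{\pi_{\theta_{\mathrm{old}}}(a \mid s)}\, A^{\pi_{\theta_{\mathrm{old}}}}(s, a) \right] \\
\text{subject to} \;\; & \mathbb{E}_{s \sim \pi_{\theta_{\mathrm{old}}}}\!\left[ D_{\mathrm{KL}}\!\left( \pi_{\theta_{\mathrm{old}}}(\cdot \mid s) \,\big\|\, \pi_{\theta}(\cdot \mid s) \right) \right] \le \delta .
\end{aligned}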
Cite
Text
Schulman et al. "Trust Region Policy Optimization." International Conference on Machine Learning, 2015.
Markdown
[Schulman et al. "Trust Region Policy Optimization." International Conference on Machine Learning, 2015.](https://mlanthology.org/icml/2015/schulman2015icml-trust/)
BibTeX
@inproceedings{schulman2015icml-trust,
title = {{Trust Region Policy Optimization}},
author = {Schulman, John and Levine, Sergey and Abbeel, Pieter and Jordan, Michael and Moritz, Philipp},
booktitle = {International Conference on Machine Learning},
year = {2015},
pages = {1889-1897},
volume = {37},
url = {https://mlanthology.org/icml/2015/schulman2015icml-trust/}
}