Implementation Matters in Deep RL: A Case Study on PPO and TRPO

Abstract

We study the roots of algorithmic progress in deep policy gradient algorithms through a case study on two popular algorithms: Proximal Policy Optimization (PPO) and Trust Region Policy Optimization (TRPO). Specifically, we investigate the consequences of "code-level optimizations": algorithm augmentations found only in implementations or described as auxiliary details to the core algorithm. Seemingly of secondary importance, such optimizations turn out to have a major impact on agent behavior. Our results show that they (a) are responsible for most of PPO's gain in cumulative reward over TRPO, and (b) fundamentally change how RL methods function. These insights show the difficulty, and importance, of attributing performance gains in deep reinforcement learning.
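
To make the abstract's notion of a "code-level optimization" concrete, below is a minimal PyTorch sketch of one example the paper examines: value-function clipping, which appears in common PPO implementations but not in the original PPO derivation. The function name `ppo_losses` and its argument layout are illustrative assumptions, not taken from the paper's code.

```python
import torch

def ppo_losses(ratio, adv, values, old_values, returns, clip_eps=0.2):
    """Illustrative PPO losses: core clipped objective + one code-level optimization.

    ratio:      pi_new(a|s) / pi_old(a|s) for sampled actions
    adv:        advantage estimates
    values:     current value-network predictions
    old_values: value predictions from the data-collection policy
    returns:    empirical returns (value targets)
    """
    # Core PPO clipped surrogate objective (part of the algorithm as published).
    policy_loss = -torch.min(
        ratio * adv,
        torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps) * adv,
    ).mean()

    # Code-level optimization: value-function clipping. Found in popular PPO
    # implementations rather than the paper; it keeps the value update within
    # clip_eps of the old value estimate, mirroring the policy clipping.
    v_clipped = old_values + torch.clamp(values - old_values, -clip_eps, clip_eps)
    value_loss = torch.max(
        (values - returns) ** 2,
        (v_clipped - returns) ** 2,
    ).mean()

    return policy_loss, value_loss
```

The paper's point is that augmentations like this one, along with others such as reward scaling and learning-rate annealing, are easy to dismiss as implementation details yet account for much of the measured gap between PPO and TRPO.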

Cite

Text

Engstrom et al. "Implementation Matters in Deep RL: A Case Study on PPO and TRPO." International Conference on Learning Representations, 2020.

Markdown

[Engstrom et al. "Implementation Matters in Deep RL: A Case Study on PPO and TRPO." International Conference on Learning Representations, 2020.](https://mlanthology.org/iclr/2020/engstrom2020iclr-implementation/)

BibTeX

@inproceedings{engstrom2020iclr-implementation,
  title     = {{Implementation Matters in Deep RL: A Case Study on PPO and TRPO}},
  author    = {Engstrom, Logan and Ilyas, Andrew and Santurkar, Shibani and Tsipras, Dimitris and Janoos, Firdaus and Rudolph, Larry and Madry, Aleksander},
  booktitle = {International Conference on Learning Representations},
  year      = {2020},
  url       = {https://mlanthology.org/iclr/2020/engstrom2020iclr-implementation/}
}