Evolved Policy Gradients

Abstract

We propose a metalearning approach for learning gradient-based reinforcement learning (RL) algorithms. The idea is to evolve a differentiable loss function, such that an agent, which optimizes its policy to minimize this loss, will achieve high rewards. The loss is parametrized via temporal convolutions over the agent's experience. Because this loss is highly flexible in its ability to take into account the agent's history, it enables fast task learning. Empirical results show that our evolved policy gradient algorithm (EPG) achieves faster learning on several randomized environments compared to an off-the-shelf policy gradient method. We also demonstrate that EPG's learned loss can generalize to out-of-distribution test-time tasks, and exhibits qualitatively different behavior from other popular metalearning algorithms.
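The core idea — an outer evolution-strategies loop that tunes the parameters of a learned loss, while an inner loop adapts the policy by gradient descent on that loss — can be sketched in miniature. This is not the paper's method (which uses temporal convolutions over experience and neural-network policies); it is a toy illustration with a scalar policy, a hand-picked linear loss parametrization, and a hypothetical 1-D task with reward r(a) = -(a - TARGET)^2.

```python
import numpy as np

rng = np.random.default_rng(0)
TARGET = 1.5  # hypothetical task: true reward is r(a) = -(a - TARGET)**2


def inner_loop(phi, steps=20, lr=0.1):
    """Adapt a scalar policy mu by descending the *learned* loss.

    The learned loss here is a toy stand-in for EPG's temporal-convolution
    loss: L(mu) = phi[0]*mu + phi[1]*mu**2 + phi[2]*r_mean*mu, where r_mean
    is a crude feature of recent experience (mean of the last rewards).
    """
    mu = 0.0
    rewards = [0.0]
    for _ in range(steps):
        r_mean = np.mean(rewards[-5:])            # experience feature
        grad = phi[0] + 2 * phi[1] * mu + phi[2] * r_mean  # dL/dmu
        mu -= lr * grad                            # inner gradient step
        rewards.append(-(mu - TARGET) ** 2)        # observe true reward
    return -(mu - TARGET) ** 2                     # final true reward


# Outer loop: evolution strategies over the loss parameters phi.
# High final reward after inner-loop adaptation -> that phi is favored.
phi = np.zeros(3)
sigma, outer_lr, pop = 0.1, 0.05, 50
for _ in range(200):
    eps = rng.standard_normal((pop, 3))
    returns = np.array([inner_loop(phi + sigma * e) for e in eps])
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)
    phi += outer_lr / (pop * sigma) * eps.T @ returns

final_reward = inner_loop(phi)
print(final_reward)
```

With the zero initialization the learned loss has zero gradient, so the policy never moves and the reward stays at -(0 - TARGET)^2; after evolution, descending the learned loss should steer mu toward the high-reward region, even though the loss never sees the reward function's analytic form — only experience features.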

Cite

Text

Houthooft et al. "Evolved Policy Gradients." Neural Information Processing Systems, 2018.

Markdown

[Houthooft et al. "Evolved Policy Gradients." Neural Information Processing Systems, 2018.](https://mlanthology.org/neurips/2018/houthooft2018neurips-evolved/)

BibTeX

@inproceedings{houthooft2018neurips-evolved,
  title     = {{Evolved Policy Gradients}},
  author    = {Houthooft, Rein and Chen, Yuhua and Isola, Phillip and Stadie, Bradly and Wolski, Filip and Ho, Jonathan and Abbeel, Pieter},
  booktitle = {Neural Information Processing Systems},
  year      = {2018},
  pages     = {5400--5409},
  url       = {https://mlanthology.org/neurips/2018/houthooft2018neurips-evolved/}
}