Taylor Expansion Policy Optimization

Abstract

In this work, we investigate the application of Taylor expansions in reinforcement learning. In particular, we propose Taylor Expansion Policy Optimization, a policy optimization formalism that generalizes prior work (e.g., TRPO) as a first-order special case. We also show that Taylor expansions intimately relate to off-policy evaluation. Finally, we show that this new formulation entails modifications that improve the performance of several state-of-the-art distributed algorithms.
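
The abstract's first-order claim can be made concrete with a short sketch. The notation below ($\pi$ for the target policy, $\mu$ for the behavior policy, $Q^{\mu}$ and $d^{\mu}$ for the action-value function and discounted state distribution of $\mu$, $\gamma$ for the discount factor) is assumed for illustration rather than taken from this page, and the expansion is written only to first order:

\documentclass{article}
\usepackage{amsmath}
\usepackage{amssymb}
\begin{document}
% Sketch of the expansion the abstract alludes to: the expected return of
% the target policy pi, expanded around the behavior policy mu. Notation
% is assumed for illustration, not taken from this page.
Expanding the expected return $J(\pi)$ around a behavior policy $\mu$ gives
\[
J(\pi) \;=\; J(\mu)
\;+\; \underbrace{\frac{1}{1-\gamma}\,
\mathbb{E}_{s \sim d^{\mu},\, a \sim \mu}\!\left[
\Bigl(\tfrac{\pi(a \mid s)}{\mu(a \mid s)} - 1\Bigr) Q^{\mu}(s, a)
\right]}_{\text{first-order term}}
\;+\; (\text{higher-order terms}).
\]
% Truncating after the first-order term recovers the surrogate objective
% optimized by trust-region methods such as TRPO, which is the sense in
% which they arise as a first-order special case.
\end{document}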

Cite

Text

Tang et al. "Taylor Expansion Policy Optimization." International Conference on Machine Learning, 2020.

Markdown

[Tang et al. "Taylor Expansion Policy Optimization." International Conference on Machine Learning, 2020.](https://mlanthology.org/icml/2020/tang2020icml-taylor/)

BibTeX

@inproceedings{tang2020icml-taylor,
  title     = {{Taylor Expansion Policy Optimization}},
  author    = {Tang, Yunhao and Valko, Michal and Munos, R{\'e}mi},
  booktitle = {International Conference on Machine Learning},
  year      = {2020},
  pages     = {9397--9406},
  volume    = {119},
  url       = {https://mlanthology.org/icml/2020/tang2020icml-taylor/}
}