Efficient Wasserstein Natural Gradients for Reinforcement Learning

Abstract

A novel optimization approach is proposed for application to policy gradient methods and evolution strategies for reinforcement learning (RL). The procedure uses a computationally efficient \emph{Wasserstein natural gradient} (WNG) descent that takes advantage of the geometry induced by a Wasserstein penalty to speed optimization. This method follows the recent theme in RL of including divergence penalties in the objective to establish trust regions. Experiments on challenging tasks demonstrate improvements in both computational cost and performance over advanced baselines.
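As a rough sketch of the idea the abstract describes (notation assumed here, not taken from the paper): divergence-penalized trust-region objectives typically subtract a penalty on the distance between the new and old policies, and a natural gradient preconditions the update by the metric that penalty induces.

```latex
% Hedged sketch (all symbols assumed): a Wasserstein-penalized policy objective
% and the corresponding natural-gradient update. J is the expected return,
% W_2 the 2-Wasserstein distance, lambda a penalty weight, eta a step size,
% and G_W(theta) the metric (information) matrix induced by the Wasserstein geometry.
\max_{\theta} \; J(\theta) \;-\; \lambda\, W_2^2\!\left(\pi_\theta,\, \pi_{\theta_{\text{old}}}\right),
\qquad
\theta_{t+1} \;=\; \theta_t \;+\; \eta\, G_W(\theta_t)^{-1} \nabla_\theta J(\theta_t)
```

The "computationally efficient" part of the paper's contribution concerns estimating this preconditioned direction without forming or inverting `G_W` explicitly; the display above is only a generic template for such methods.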

Cite

Text

Moskovitz et al. "Efficient Wasserstein Natural Gradients for Reinforcement Learning." International Conference on Learning Representations, 2021.

Markdown

[Moskovitz et al. "Efficient Wasserstein Natural Gradients for Reinforcement Learning." International Conference on Learning Representations, 2021.](https://mlanthology.org/iclr/2021/moskovitz2021iclr-efficient/)

BibTeX

@inproceedings{moskovitz2021iclr-efficient,
  title     = {{Efficient Wasserstein Natural Gradients for Reinforcement Learning}},
  author    = {Moskovitz, Ted and Arbel, Michael and Husz{\'a}r, Ferenc and Gretton, Arthur},
  booktitle = {International Conference on Learning Representations},
  year      = {2021},
  url       = {https://mlanthology.org/iclr/2021/moskovitz2021iclr-efficient/}
}