Learning to Score Behaviors for Guided Policy Optimization
Abstract
We introduce a new approach for comparing reinforcement learning policies, using Wasserstein distances (WDs) in a newly defined latent behavioral space. We show that by utilizing the dual formulation of the WD, we can learn score functions over policy behaviors that can in turn be used to lead policy optimization towards (or away from) (un)desired behaviors. Combined with smoothed WDs, the dual formulation allows us to devise efficient algorithms that take stochastic gradient descent steps through WD regularizers. We incorporate these regularizers into two novel on-policy algorithms, Behavior-Guided Policy Gradient and Behavior-Guided Evolution Strategies, which we demonstrate can outperform existing methods in a variety of challenging environments. We also provide an open source demo.
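The abstract's central computational point is that a smoothed (entropy-regularized) Wasserstein distance admits a formulation amenable to stochastic gradient steps, so the distance between two policies' behavior embeddings can be folded into the optimization objective as a regularizer. The sketch below is purely illustrative and not the authors' implementation: it computes an entropic WD between two synthetic sets of behavior embeddings via Sinkhorn iterations and notes how such a distance could enter a policy loss. All names (`sinkhorn_distance`, `behavior_a`, `behavior_b`, `epsilon`, `n_iters`, `beta`) are assumptions; the paper's learned behavioral embedding map and dual score functions are not reproduced here.

```python
# Illustrative sketch only (not the paper's method): entropy-smoothed Wasserstein
# distance between two empirical distributions of behavior embeddings, computed
# with Sinkhorn iterations on the primal/dual scaling vectors.
import numpy as np


def sinkhorn_distance(x, y, epsilon=1.0, n_iters=200):
    """Smoothed WD between the empirical distributions given by rows of x and y."""
    n, m = x.shape[0], y.shape[0]
    # Pairwise squared-Euclidean cost between behavior embeddings.
    cost = np.sum((x[:, None, :] - y[None, :, :]) ** 2, axis=-1)
    # Gibbs kernel of the cost under the entropic smoothing parameter epsilon.
    K = np.exp(-cost / epsilon)
    a, b = np.full(n, 1.0 / n), np.full(m, 1.0 / m)  # uniform marginals
    u, v = np.ones(n), np.ones(m)
    for _ in range(n_iters):  # Sinkhorn fixed-point updates of the dual scalings
        u = a / (K @ v)
        v = b / (K.T @ u)
    transport_plan = u[:, None] * K * v[None, :]
    return np.sum(transport_plan * cost)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    behavior_a = rng.normal(size=(64, 8))           # embeddings of policy A's trajectories
    behavior_b = rng.normal(loc=0.5, size=(64, 8))  # embeddings of policy B's trajectories
    wd = sinkhorn_distance(behavior_a, behavior_b)
    # A behavior-guided objective could penalize (or encourage) this distance,
    # e.g. regularized_loss = policy_loss + beta * wd (beta is a hypothetical weight).
    print(f"smoothed WD between behavior embeddings: {wd:.4f}")
```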
Cite
Text
Pacchiano et al. "Learning to Score Behaviors for Guided Policy Optimization." International Conference on Machine Learning, 2020.
Markdown
[Pacchiano et al. "Learning to Score Behaviors for Guided Policy Optimization." International Conference on Machine Learning, 2020.](https://mlanthology.org/icml/2020/pacchiano2020icml-learning/)
BibTeX
@inproceedings{pacchiano2020icml-learning,
title = {{Learning to Score Behaviors for Guided Policy Optimization}},
author = {Pacchiano, Aldo and Parker-Holder, Jack and Tang, Yunhao and Choromanski, Krzysztof and Choromanska, Anna and Jordan, Michael},
booktitle = {International Conference on Machine Learning},
year = {2020},
pages = {7445--7454},
volume = {119},
url = {https://mlanthology.org/icml/2020/pacchiano2020icml-learning/}
}