Diverse Exploration via Conjugate Policies for Policy Gradient Methods
Abstract
We address the challenge of achieving effective exploration while maintaining good performance in policy gradient methods. As a solution, we propose diverse exploration (DE) via conjugate policies. DE learns and deploys a set of conjugate policies, which can be conveniently generated as a byproduct of conjugate gradient descent. We provide both theoretical and empirical results showing the effectiveness of DE at achieving exploration and improving policy performance, as well as its advantage over exploration by random policy perturbations.
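To illustrate the idea described in the abstract, below is a minimal sketch (not the authors' code) of how conjugate directions fall out of a conjugate gradient solve and can then be used to perturb a policy's parameters. The matrix `A` stands in for the curvature (e.g., Fisher information) matrix used by natural/trust-region policy gradient methods; names such as `perturb_scale` are illustrative assumptions.

```python
import numpy as np

def conjugate_gradient_with_directions(A, b, iters=10, tol=1e-10):
    """Solve A x = b by CG, returning the solution and the conjugate directions."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    directions = []
    for _ in range(iters):
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)
        x = x + alpha * p
        directions.append(p.copy())          # conjugate direction from this iteration
        r_new = r - alpha * Ap
        if np.linalg.norm(r_new) < tol:
            break
        beta = (r_new @ r_new) / (r @ r)
        p = r_new + beta * p
        r = r_new
    return x, directions

# Toy example: a random SPD matrix playing the role of the curvature matrix.
rng = np.random.default_rng(0)
M = rng.normal(size=(5, 5))
A = M @ M.T + 5 * np.eye(5)
b = rng.normal(size=5)
step, directions = conjugate_gradient_with_directions(A, b)

theta = rng.normal(size=5)                   # current policy parameters
perturb_scale = 0.1                          # illustrative perturbation magnitude
conjugate_policies = [theta + perturb_scale * d / np.linalg.norm(d)
                      for d in directions]   # one perturbed policy per direction
```

The point of the sketch is only that the directions are already computed during the CG solve, so deriving a set of mutually conjugate policy perturbations adds essentially no extra cost; the paper's actual construction and deployment of these policies follows its own procedure.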
Cite
Text
Cohen et al. "Diverse Exploration via Conjugate Policies for Policy Gradient Methods." AAAI Conference on Artificial Intelligence, 2019. doi:10.1609/AAAI.V33I01.33013404

Markdown

[Cohen et al. "Diverse Exploration via Conjugate Policies for Policy Gradient Methods." AAAI Conference on Artificial Intelligence, 2019.](https://mlanthology.org/aaai/2019/cohen2019aaai-diverse/) doi:10.1609/AAAI.V33I01.33013404

BibTeX
@inproceedings{cohen2019aaai-diverse,
title = {{Diverse Exploration via Conjugate Policies for Policy Gradient Methods}},
author = {Cohen, Andrew and Qiao, Xingye and Yu, Lei and Way, Elliot and Tong, Xiangrong},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2019},
  pages = {3404--3411},
doi = {10.1609/AAAI.V33I01.33013404},
url = {https://mlanthology.org/aaai/2019/cohen2019aaai-diverse/}
}