Diverse Exploration for Fast and Safe Policy Improvement
Abstract
We study an important yet under-addressed problem: quickly and safely improving policies in online reinforcement learning domains. As a solution, we propose a novel exploration strategy, diverse exploration (DE), which learns and deploys a diverse set of safe policies to explore the environment. We provide DE theory explaining why diversity in behavior policies enables effective exploration without sacrificing exploitation. Our empirical study shows that an online policy improvement algorithm framework implementing the DE strategy can achieve both fast policy improvement and safe online performance.
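The abstract's core idea, deploying a diverse set of safe policies rather than a single behavior policy, can be sketched as a simple rotation loop. This is a minimal illustration under assumed placeholder names (`diverse_exploration`, `env_rollout`), not the paper's actual algorithm or API:

```python
# Hypothetical sketch of the diverse exploration (DE) deployment loop:
# every policy in a set of (assumed already safety-tested) policies takes a
# turn generating trajectories, and the pooled data drives the next
# policy-improvement step.
import random


def diverse_exploration(policies, env_rollout, iterations=3):
    """Deploy each safe policy in turn and pool the collected trajectories.

    `policies` is a set of behavior policies assumed to have passed a safety
    test; `env_rollout` is a placeholder for one episode of environment
    interaction under a given policy.
    """
    data = []
    for _ in range(iterations):
        for pi in policies:  # each diverse policy explores in turn
            data.append(env_rollout(pi))
    return data


# Toy usage: policies are just labels; a rollout returns a tagged sample.
trajs = diverse_exploration(["pi_a", "pi_b"],
                            lambda p: (p, random.random()))
```

The design point this illustrates is that exploration diversity comes from varying the behavior policy across deployments, while safety is preserved because only policies that cleared a safety criterion are ever deployed.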
Cite
Text
Cohen et al. "Diverse Exploration for Fast and Safe Policy Improvement." AAAI Conference on Artificial Intelligence, 2018. doi:10.1609/AAAI.V32I1.11758
Markdown
[Cohen et al. "Diverse Exploration for Fast and Safe Policy Improvement." AAAI Conference on Artificial Intelligence, 2018.](https://mlanthology.org/aaai/2018/cohen2018aaai-diverse/) doi:10.1609/AAAI.V32I1.11758
BibTeX
@inproceedings{cohen2018aaai-diverse,
title = {{Diverse Exploration for Fast and Safe Policy Improvement}},
author = {Cohen, Andrew and Yu, Lei and Wright, Robert},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2018},
pages = {2876--2883},
doi = {10.1609/AAAI.V32I1.11758},
url = {https://mlanthology.org/aaai/2018/cohen2018aaai-diverse/}
}