Temporally-Extended ε-Greedy Exploration
Abstract
Recent work on exploration in reinforcement learning (RL) has led to a series of increasingly complex solutions to the problem. This increase in complexity often comes at the expense of generality. Recent empirical studies suggest that, when applied to a broader set of domains, some sophisticated exploration methods are outperformed by simpler counterparts, such as ε-greedy. In this paper we propose an exploration algorithm that retains the simplicity of ε-greedy while reducing dithering. We build on a simple hypothesis: the main limitation of ε-greedy exploration is its lack of temporal persistence, which limits its ability to escape local optima. We propose a temporally extended form of ε-greedy that simply repeats the sampled action for a random duration. It turns out that, for many duration distributions, this suffices to improve exploration on a large set of domains. Interestingly, a class of distributions inspired by ecological models of animal foraging behaviour yields particularly strong performance.
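The abstract describes the method concretely enough to sketch: with some probability an exploratory action is sampled and then repeated for a random number of steps, rather than for a single step. Below is a minimal Python illustration of that idea, assuming a discrete action space and Q-values supplied by the caller. The class name, the parameters epsilon and mu, and the choice of a heavy-tailed zeta (Zipf) duration distribution are illustrative assumptions; the abstract itself only says the sampled action is repeated "for a random duration", with foraging-inspired (heavy-tailed) distributions performing particularly well.

import numpy as np

class TemporallyExtendedEpsGreedy:
    # Sketch of temporally-extended eps-greedy action selection (not the
    # authors' reference implementation). With probability epsilon a uniform
    # random action is sampled and then persisted for a random duration; the
    # zeta/Zipf duration distribution and the parameter names are assumptions.

    def __init__(self, n_actions, epsilon=0.1, mu=2.0, seed=0):
        self.n_actions = n_actions
        self.epsilon = epsilon
        self.mu = mu                      # tail exponent of the duration distribution
        self.rng = np.random.default_rng(seed)
        self.repeat_action = None         # exploratory action currently being persisted
        self.steps_left = 0               # remaining steps of the current repetition

    def select(self, q_values):
        if self.steps_left > 0:           # continue an ongoing exploratory repetition
            self.steps_left -= 1
            return self.repeat_action
        if self.rng.random() < self.epsilon:
            # start a new exploratory "flight": sample an action and a duration
            self.repeat_action = int(self.rng.integers(self.n_actions))
            self.steps_left = int(self.rng.zipf(self.mu)) - 1   # heavy-tailed duration
            return self.repeat_action
        return int(np.argmax(q_values))   # otherwise exploit greedily, as in plain eps-greedy

A caller would construct the policy once and call select(q_values) at every environment step; because the repetition state lives in the object, the temporal persistence spans multiple calls, which is what distinguishes this from standard per-step ε-greedy dithering.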
Cite
Text
Dabney et al. "Temporally-Extended ε-Greedy Exploration." International Conference on Learning Representations, 2021.
Markdown
[Dabney et al. "Temporally-Extended ε-Greedy Exploration." International Conference on Learning Representations, 2021.](https://mlanthology.org/iclr/2021/dabney2021iclr-temporallyextended/)
BibTeX
@inproceedings{dabney2021iclr-temporallyextended,
title = {{Temporally-Extended ε-Greedy Exploration}},
author = {Dabney, Will and Ostrovski, Georg and Barreto, Andre},
booktitle = {International Conference on Learning Representations},
year = {2021},
url = {https://mlanthology.org/iclr/2021/dabney2021iclr-temporallyextended/}
}