When Should Agents Explore?

Abstract

Exploration remains a central challenge for reinforcement learning (RL). Virtually all existing methods share the feature of a *monolithic* behaviour policy that changes only gradually (at best). In contrast, the exploratory behaviours of animals and humans exhibit a rich diversity, notably including forms of *switching* between modes. This paper presents an initial study of mode-switching, non-monolithic exploration for RL. We investigate which modes to switch between, at what timescales it makes sense to switch, and what signals make for good switching triggers. We also propose practical algorithmic components that make the switching mechanism adaptive and robust, enabling flexibility without an accompanying hyper-parameter-tuning burden. Finally, we report a promising initial study on Atari, using two-mode exploration and switching at sub-episodic timescales.

Cite

Text

Pislar et al. "When Should Agents Explore?" International Conference on Learning Representations, 2022.

Markdown

[Pislar et al. "When Should Agents Explore?" International Conference on Learning Representations, 2022.](https://mlanthology.org/iclr/2022/pislar2022iclr-agents/)

BibTeX

@inproceedings{pislar2022iclr-agents,
  title     = {{When Should Agents Explore?}},
  author    = {Pislar, Miruna and Szepesvari, David and Ostrovski, Georg and Borsa, Diana L. and Schaul, Tom},
  booktitle = {International Conference on Learning Representations},
  year      = {2022},
  url       = {https://mlanthology.org/iclr/2022/pislar2022iclr-agents/}
}