DORA the Explorer: Directed Outreaching Reinforcement Action-Selection
Abstract
Exploration is a fundamental aspect of Reinforcement Learning, typically implemented using stochastic action-selection. Exploration, however, can be more efficient if directed toward gaining new world knowledge. Visit-counters have proven useful both in practice and in theory for directed exploration. However, a major limitation of counters is their locality. While there are a few model-based solutions to this shortcoming, a model-free approach is still missing. We propose $E$-values, a generalization of counters that can be used to evaluate the propagating exploratory value over state-action trajectories. We compare our approach to commonly used RL techniques, and show that using $E$-values improves learning and performance over traditional counters. We also show how our method can be implemented with function approximation to efficiently learn continuous MDPs. We demonstrate this by showing that our approach surpasses state-of-the-art performance on the Freeway Atari 2600 game.
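Below is a minimal tabular sketch of the idea described in the abstract: $E$-values act as generalized counters that are updated model-free, like a value function, so that reductions propagate along state-action trajectories. The variable names, the learning rate `eta`, and the exploration discount `gamma_E` are illustrative assumptions, not taken from the paper's code.

```python
import numpy as np

# Illustrative sketch (assumed names and hyperparameters, not the authors' code).
# E-values start at 1 and are updated like a SARSA value function whose reward
# is always 0, so they shrink with visits and the decrease propagates backward
# along trajectories, generalizing a local visit-counter.

n_states, n_actions = 10, 4
eta, gamma_E = 0.1, 0.9          # assumed learning rate and exploration discount
E = np.ones((n_states, n_actions))

def update_E(s, a, s_next, a_next):
    """One SARSA-style update of the E-values with zero reward."""
    target = gamma_E * E[s_next, a_next]
    E[s, a] += eta * (target - E[s, a])

def generalized_counter(s, a):
    """Map an E-value back to an effective visit count.

    With gamma_E = 0, after n visits E = (1 - eta)**n, so this recovers n exactly.
    """
    return np.log(E[s, a]) / np.log(1.0 - eta)
```

The effective count returned by `generalized_counter` could then feed any counter-based exploration bonus (e.g. an inverse-square-root bonus), replacing a purely local counter with one that reflects exploratory value further along the trajectory.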
Cite
Text
Fox et al. "DORA the Explorer: Directed Outreaching Reinforcement Action-Selection." International Conference on Learning Representations, 2018.
Markdown
[Fox et al. "DORA the Explorer: Directed Outreaching Reinforcement Action-Selection." International Conference on Learning Representations, 2018.](https://mlanthology.org/iclr/2018/fox2018iclr-dora/)
BibTeX
@inproceedings{fox2018iclr-dora,
title = {{DORA the Explorer: Directed Outreaching Reinforcement Action-Selection}},
author = {Fox, Lior and Choshen, Leshem and Loewenstein, Yonatan},
booktitle = {International Conference on Learning Representations},
year = {2018},
url = {https://mlanthology.org/iclr/2018/fox2018iclr-dora/}
}