LESSON: Learning to Integrate Exploration Strategies for Reinforcement Learning via an Option Framework
Abstract
In this paper, a unified framework for exploration in reinforcement learning (RL) is proposed based on an option-critic architecture. The proposed framework learns to integrate a set of diverse exploration strategies so that the agent can adaptively select the most effective one, realizing a suitable exploration-exploitation trade-off for each given task. The effectiveness of the proposed exploration framework is demonstrated by various experiments in the MiniGrid and Atari environments.
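As a rough illustration of the high-level idea (not the paper's implementation), an option policy can choose among a small library of exploration strategies at each decision point. The strategy names, the softmax option policy, and all numeric values below are illustrative assumptions; a minimal sketch:

```python
import random
import math

def greedy(q_values):
    # Pure exploitation: pick the action with the highest Q-value.
    return max(range(len(q_values)), key=lambda a: q_values[a])

def epsilon_greedy(q_values, eps=0.3):
    # Undirected exploration: random action with probability eps (illustrative value).
    if random.random() < eps:
        return random.randrange(len(q_values))
    return greedy(q_values)

def boltzmann(q_values, temp=1.0):
    # Softmax exploration: sample actions with probability proportional to exp(Q/temp).
    weights = [math.exp(q / temp) for q in q_values]
    return random.choices(range(len(q_values)), weights=weights)[0]

# The "options" here are exploration strategies the agent can switch between.
OPTIONS = [greedy, epsilon_greedy, boltzmann]

def select_option(option_preferences, temp=1.0):
    # A softmax over learned option preferences stands in for the option policy,
    # so the agent can adapt which strategy it favors over training.
    weights = [math.exp(p / temp) for p in option_preferences]
    return random.choices(range(len(OPTIONS)), weights=weights)[0]

def act(q_values, option_preferences):
    # One step: pick an exploration option, then act under that option.
    opt = select_option(option_preferences)
    return OPTIONS[opt](q_values), opt
```

In the actual method, the option policy and termination conditions are learned jointly with the intra-option policies via the option-critic architecture rather than hand-specified as above.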
Cite
Text
Kim et al. "LESSON: Learning to Integrate Exploration Strategies for Reinforcement Learning via an Option Framework." International Conference on Machine Learning, 2023.
Markdown
[Kim et al. "LESSON: Learning to Integrate Exploration Strategies for Reinforcement Learning via an Option Framework." International Conference on Machine Learning, 2023.](https://mlanthology.org/icml/2023/kim2023icml-lesson/)
BibTeX
@inproceedings{kim2023icml-lesson,
title = {{LESSON: Learning to Integrate Exploration Strategies for Reinforcement Learning via an Option Framework}},
author = {Kim, Woojun and Kim, Jeonghye and Sung, Youngchul},
booktitle = {International Conference on Machine Learning},
year = {2023},
pages = {16619--16638},
volume = {202},
url = {https://mlanthology.org/icml/2023/kim2023icml-lesson/}
}