Learning Intrinsically Motivated Options to Stimulate Policy Exploration
Abstract
A Reinforcement Learning (RL) agent needs to find an optimal sequence of actions in order to maximize rewards. This requires consistent exploration of states and action sequences to ensure the policy found is optimal. One way to motivate exploration is through intrinsic rewards, i.e. rewards the agent generates itself to guide its behavior towards interesting states. We propose to learn from such intrinsic rewards through exploration options, i.e. additional temporally-extended actions that call separate policies (or "Explorer" agents), each associated with an intrinsic reward. We show that this method has several key advantages over the usual approach of a weighted sum of rewards, mainly task-transfer abilities and scalability to multiple reward functions.
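To make the idea concrete, below is a minimal, illustrative sketch, not the authors' implementation: a tabular task policy whose action set is augmented with one exploration option that hands control to a separate "Explorer" policy trained only on an intrinsic reward (here a count-based novelty bonus, chosen as an assumption). The toy environment, the bonus, and names such as OPTION_LEN are hypothetical and only serve to show how the option mechanism differs from summing rewards.

```python
# Illustrative sketch of exploration options (not the paper's implementation).
# Assumptions: a toy discrete chain environment, a count-based intrinsic bonus,
# and one Explorer; OPTION_LEN and count_bonus are invented for this example.
import numpy as np

n_states, n_actions, OPTION_LEN = 20, 4, 5
rng = np.random.default_rng(0)

# Task policy: one extra action (the exploration option) on top of the primitives.
q_task = np.zeros((n_states, n_actions + 1))   # last column = call the Explorer
q_explorer = np.zeros((n_states, n_actions))   # trained on intrinsic reward only
visits = np.zeros(n_states)

def count_bonus(s):
    """Illustrative intrinsic reward: count-based novelty bonus."""
    return 1.0 / np.sqrt(1.0 + visits[s])

def env_step(s, a):
    """Toy stand-in for the environment: a bounded chain, reward at the far end."""
    s_next = min(n_states - 1, max(0, s + (1 if a % 2 == 0 else -1)))
    return s_next, float(s_next == n_states - 1)

def eps_greedy(q_row, eps=0.1):
    return int(rng.integers(len(q_row))) if rng.random() < eps else int(np.argmax(q_row))

alpha, gamma = 0.1, 0.99
s = 0
for _ in range(5000):
    a = eps_greedy(q_task[s])
    if a < n_actions:                      # primitive action
        s2, r_ext = env_step(s, a)
        steps = [(s, a, s2)]
    else:                                  # exploration option: Explorer acts for OPTION_LEN steps
        steps, r_ext, s_cur = [], 0.0, s
        for t in range(OPTION_LEN):
            a_e = eps_greedy(q_explorer[s_cur])
            s_next, r = env_step(s_cur, a_e)
            steps.append((s_cur, a_e, s_next))
            r_ext += (gamma ** t) * r      # discounted extrinsic return over the option
            s_cur = s_next
        s2 = s_cur
    # The Explorer learns from its own intrinsic reward on every primitive transition.
    for (si, ai, sj) in steps:
        visits[sj] += 1
        r_int = count_bonus(sj)
        q_explorer[si, ai] += alpha * (r_int + gamma * q_explorer[sj].max() - q_explorer[si, ai])
    # The task policy learns only from extrinsic reward, with an SMDP-style update
    # over the (possibly temporally-extended) action it chose.
    q_task[s, a] += alpha * (r_ext + gamma ** len(steps) * q_task[s2].max() - q_task[s, a])
    s = 0 if s2 == n_states - 1 else s2
```

The point of the sketch is that the task policy's value function never mixes extrinsic and intrinsic rewards; exploration pressure enters only through the option that delegates to the Explorer, which is what allows the Explorer to be reused or swapped across tasks.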
Cite
Text
Bagot et al. "Learning Intrinsically Motivated Options to Stimulate Policy Exploration." ICML 2020 Workshops: LifelongML, 2020.

Markdown

[Bagot et al. "Learning Intrinsically Motivated Options to Stimulate Policy Exploration." ICML 2020 Workshops: LifelongML, 2020.](https://mlanthology.org/icmlw/2020/bagot2020icmlw-learning/)

BibTeX
@inproceedings{bagot2020icmlw-learning,
  title     = {{Learning Intrinsically Motivated Options to Stimulate Policy Exploration}},
  author    = {Bagot, Louis and Mets, Kevin and Latré, Steven},
  booktitle = {ICML 2020 Workshops: LifelongML},
  year      = {2020},
  url       = {https://mlanthology.org/icmlw/2020/bagot2020icmlw-learning/}
}