The Option-Critic Architecture
Abstract
Temporal abstraction is key to scaling up learning and planning in reinforcement learning. While planning with temporally extended actions is well understood, creating such abstractions autonomously from data has remained challenging. We tackle this problem in the framework of options [Sutton, Precup, and Singh, 1999; Precup, 2000]. We derive policy gradient theorems for options and propose a new option-critic architecture capable of learning both the internal policies and the termination conditions of options, in tandem with the policy over options, and without the need to provide any additional rewards or subgoals. Experimental results in both discrete and continuous environments showcase the flexibility and efficiency of the framework.
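To make the learning scheme the abstract describes concrete, below is a minimal tabular sketch of the option-critic updates: an intra-option Q-learning critic, a softmax intra-option policy updated by the policy gradient, and a sigmoid termination function updated by the termination gradient. The toy chain environment, hyperparameters, and variable names are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

# Minimal tabular option-critic sketch (illustrative; not the authors' code).
# Environment: a 5-state chain where action 1 moves right, action 0 moves
# left, and reaching the rightmost state yields reward 1 and ends the episode.
rng = np.random.default_rng(0)
n_states, n_actions, n_options = 5, 2, 2
goal = n_states - 1
alpha_c, alpha_t, alpha_b, gamma = 0.5, 0.25, 0.25, 0.99  # assumed step sizes

Q_U = np.zeros((n_states, n_options, n_actions))    # critic: Q_U(s, o, a)
theta = np.zeros((n_states, n_options, n_actions))  # intra-option policy params
vartheta = np.zeros((n_states, n_options))          # termination params

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def pi(s, o):   # intra-option action distribution pi_o(a | s)
    return softmax(theta[s, o])

def beta(s, o):  # termination probability beta_o(s)
    return 1.0 / (1.0 + np.exp(-vartheta[s, o]))

def q_omega(s):  # Q_Omega(s, o) = sum_a pi_o(a|s) Q_U(s, o, a), all options
    return np.array([pi(s, o) @ Q_U[s, o] for o in range(n_options)])

def step(s, a):
    s2 = min(s + 1, goal) if a == 1 else max(s - 1, 0)
    return s2, (1.0 if s2 == goal else 0.0), s2 == goal

for episode in range(300):
    s, done = 0, False
    o = int(np.argmax(q_omega(s)))  # greedy policy over options
    while not done:
        a = rng.choice(n_actions, p=pi(s, o))
        s2, r, done = step(s, a)

        # Critic: intra-option Q-learning with target
        # U(s', o) = (1 - beta) Q_Omega(s', o) + beta max_o' Q_Omega(s', o').
        q2 = q_omega(s2)
        u = 0.0 if done else (1 - beta(s2, o)) * q2[o] + beta(s2, o) * q2.max()
        Q_U[s, o, a] += alpha_c * (r + gamma * u - Q_U[s, o, a])

        # Actor 1: intra-option policy gradient (log-likelihood trick,
        # weighted by the critic's Q_U(s, o, a)).
        grad_log = -pi(s, o)
        grad_log[a] += 1.0
        theta[s, o] += alpha_t * Q_U[s, o, a] * grad_log

        # Actor 2: termination gradient — raise beta where the option's
        # advantage A(s', o) = Q_Omega(s', o) - V_Omega(s') is negative.
        adv = q2[o] - q2.max()
        b = beta(s2, o)
        vartheta[s2, o] -= alpha_b * b * (1 - b) * adv

        if rng.random() < beta(s2, o):  # option terminates; pick a new one
            o = int(np.argmax(q_omega(s2)))
        s = s2
```

All three components are updated from the same stream of experience, with no subgoal rewards: the critic's estimates drive both the intra-option policy and the termination updates, which is the "in tandem" learning the abstract refers to.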
Cite
Text

Bacon et al. "The Option-Critic Architecture." AAAI Conference on Artificial Intelligence, 2017. doi:10.1609/AAAI.V31I1.10916

Markdown

[Bacon et al. "The Option-Critic Architecture." AAAI Conference on Artificial Intelligence, 2017.](https://mlanthology.org/aaai/2017/bacon2017aaai-option/) doi:10.1609/AAAI.V31I1.10916

BibTeX
@inproceedings{bacon2017aaai-option,
title = {{The Option-Critic Architecture}},
author = {Bacon, Pierre-Luc and Harb, Jean and Precup, Doina},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2017},
pages = {1726-1734},
doi = {10.1609/AAAI.V31I1.10916},
url = {https://mlanthology.org/aaai/2017/bacon2017aaai-option/}
}