When Waiting Is Not an Option: Learning Options with a Deliberation Cost

Abstract

Recent work has shown that temporally extended actions (options) can be learned fully end-to-end, as opposed to being specified in advance. While the problem of how to learn options is increasingly well understood, the question of what good options should be has remained elusive. We formulate an answer within the bounded rationality framework (Simon, 1957), through the notion of deliberation cost. We then derive practical gradient-based learning algorithms to implement this objective. Our results in the Arcade Learning Environment (ALE) show increased performance and interpretability.

Cite

Text

Harb et al. "When Waiting Is Not an Option: Learning Options with a Deliberation Cost." AAAI Conference on Artificial Intelligence, 2018. doi:10.1609/AAAI.V32I1.11831

Markdown

[Harb et al. "When Waiting Is Not an Option: Learning Options with a Deliberation Cost." AAAI Conference on Artificial Intelligence, 2018.](https://mlanthology.org/aaai/2018/harb2018aaai-waiting/) doi:10.1609/AAAI.V32I1.11831

BibTeX

@inproceedings{harb2018aaai-waiting,
  title     = {{When Waiting Is Not an Option: Learning Options with a Deliberation Cost}},
  author    = {Harb, Jean and Bacon, Pierre-Luc and Klissarov, Martin and Precup, Doina},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2018},
  pages     = {3165--3172},
  doi       = {10.1609/AAAI.V32I1.11831},
  url       = {https://mlanthology.org/aaai/2018/harb2018aaai-waiting/}
}