Per-Decision Option Discounting

Abstract

In order to solve complex problems, an agent must be able to reason over a sufficiently long horizon. Temporal abstraction, commonly modeled through options, offers the ability to reason at many timescales, but the horizon length is still determined by the discount factor of the underlying Markov Decision Process. We propose a modification to the options framework that naturally scales the agent’s horizon with option length. We show that the proposed option-step discount controls a bias-variance trade-off, with larger discounts (counter-intuitively) leading to lower estimation variance.
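
The central idea of the abstract, applying a discount once per option decision rather than once per primitive step, can be contrasted with the standard return in a small sketch. The function names, the option-level factor gamma_o, and the example rewards and durations below are illustrative assumptions, not the paper's notation; the sketch only shows how an option-step discount makes the effective horizon scale with option length.

# Illustrative sketch (not the paper's exact formulation): contrast the standard
# return, which discounts by gamma per primitive step, with an option-level
# ("per-decision") discount that applies a factor gamma_o once per option taken.

def smdp_return(segments, gamma=0.99):
    """Standard return: each option's reward is discounted by gamma**(elapsed primitive steps)."""
    g, elapsed = 0.0, 0
    for reward, duration in segments:  # (total reward inside the option, option length in steps)
        g += (gamma ** elapsed) * reward
        elapsed += duration
    return g

def option_step_return(segments, gamma_o=0.99):
    """Option-step return: one discount factor per option decision, regardless of
    how many primitive steps the option lasts, so the horizon grows with option length."""
    g = 0.0
    for k, (reward, _duration) in enumerate(segments):
        g += (gamma_o ** k) * reward
    return g

# Example data (made up): three options, each lasting 10 primitive steps with reward 1.
segments = [(1.0, 10)] * 3
print(smdp_return(segments))         # discounts by gamma**10 between decisions
print(option_step_return(segments))  # discounts by gamma_o once per decision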

Cite

Text

Harutyunyan et al. "Per-Decision Option Discounting." International Conference on Machine Learning, 2019.

Markdown

[Harutyunyan et al. "Per-Decision Option Discounting." International Conference on Machine Learning, 2019.](https://mlanthology.org/icml/2019/harutyunyan2019icml-perdecision/)

BibTeX

@inproceedings{harutyunyan2019icml-perdecision,
  title     = {{Per-Decision Option Discounting}},
  author    = {Harutyunyan, Anna and Vrancx, Peter and Hamel, Philippe and Now{\'e}, Ann and Precup, Doina},
  booktitle = {International Conference on Machine Learning},
  year      = {2019},
  pages     = {2644--2652},
  volume    = {97},
  url       = {https://mlanthology.org/icml/2019/harutyunyan2019icml-perdecision/}
}