Hyperbolic Discounting and Learning over Multiple Horizons

Abstract

Reinforcement learning (RL) typically defines a discount factor as part of the Markov Decision Process. The discount factor weights future rewards exponentially, a scheme that yields theoretical convergence guarantees for the Bellman operator. However, evidence from psychology, economics, and neuroscience suggests that humans and animals instead exhibit hyperbolic time preferences. Here we extend earlier work of Kurth-Nelson and Redish and propose an efficient deep reinforcement learning agent that acts via hyperbolic discounting and other non-exponential discount mechanisms. We demonstrate that a simple approach approximates hyperbolic discount functions while still using familiar temporal-difference learning techniques in RL. Additionally, and independently of hyperbolic discounting, we find, surprisingly, that simultaneously learning value functions over multiple time horizons is an effective auxiliary task that often improves over state-of-the-art methods.
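The "simple approach" mentioned in the abstract rests on the identity 1/(1 + kt) = ∫₀¹ γ^{kt} dγ, i.e. a hyperbolic discount can be expressed as a uniform mixture of exponential discounts over the interval of discount factors γ ∈ (0, 1). Below is a minimal numerical sketch of this identity (not the authors' agent code); the function names and the midpoint-rule quadrature are illustrative choices:

```python
import numpy as np

def hyperbolic_discount(t, k=1.0):
    """Exact hyperbolic discount: 1 / (1 + k * t)."""
    return 1.0 / (1.0 + k * t)

def hyperbolic_via_exponentials(t, k=1.0, n=10000):
    """Approximate 1/(1 + k*t) as the average of exponential
    discounts gamma**(k*t) over gamma in (0, 1), using the identity
    1/(1 + k*t) = integral_0^1 gamma**(k*t) d(gamma)."""
    # Midpoint rule: n evenly spaced gammas strictly inside (0, 1).
    gammas = (np.arange(n) + 0.5) / n
    return float(np.mean(gammas ** (k * t)))

for t in [0.0, 1.0, 5.0, 20.0]:
    exact = hyperbolic_discount(t)
    approx = hyperbolic_via_exponentials(t)
    print(f"t={t:5.1f}  exact={exact:.6f}  mixture={approx:.6f}")
```

This is the sense in which an agent maintaining value functions for many standard exponential discount factors can recover hyperbolically discounted values by averaging them, which connects directly to the paper's multi-horizon auxiliary task.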

Cite

Text

Fedus et al. "Hyperbolic Discounting and Learning over Multiple Horizons." International Conference on Learning Representations, 2020.

Markdown

[Fedus et al. "Hyperbolic Discounting and Learning over Multiple Horizons." International Conference on Learning Representations, 2020.](https://mlanthology.org/iclr/2020/fedus2020iclr-hyperbolic/)

BibTeX

@inproceedings{fedus2020iclr-hyperbolic,
  title     = {{Hyperbolic Discounting and Learning over Multiple Horizons}},
  author    = {Fedus, William and Gelada, Carles and Bengio, Yoshua and Bellemare, Marc G. and Larochelle, Hugo},
  booktitle = {International Conference on Learning Representations},
  year      = {2020},
  url       = {https://mlanthology.org/iclr/2020/fedus2020iclr-hyperbolic/}
}