Average-Reward Learning and Planning with Options

Abstract

We extend the options framework for temporal abstraction in reinforcement learning from discounted Markov decision processes (MDPs) to average-reward MDPs. Our contributions include general convergent off-policy inter-option learning algorithms, intra-option algorithms for learning values and models, and sample-based planning variants of our learning algorithms. Our algorithms and convergence proofs extend those recently developed by Wan, Naik, and Sutton. We also extend the notion of option-interrupting behaviour from the discounted to the average-reward formulation. We show the efficacy of the proposed algorithms with experiments on a continuing version of the Four-Room domain.
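To give a flavour of the style of algorithm described above: the inter-option learning update can be seen as a semi-Markov extension of Differential Q-learning, in which the rewards accumulated during an option are measured relative to a learned reward-rate estimate, and both the option value and the rate estimate move with the same TD error. The following is a rough tabular sketch under those assumptions, not the authors' implementation; the names (`q`, `avg_reward`, `update`) and the fixed step sizes are illustrative.

```python
# Illustrative sketch of an inter-option differential Q-learning
# update in the style of Wan, Naik & Sutton (2021). Hypothetical
# names and constants; not the paper's reference code.
from collections import defaultdict

alpha = 0.1             # step size for option values
eta = 0.1               # relative step size for the reward-rate estimate
avg_reward = 0.0        # running estimate of the average reward rate
q = defaultdict(float)  # tabular option values, keyed by (state, option)

def update(state, option, rewards, next_state, options):
    """One update after `option` ran from `state`, collected the
    per-step `rewards`, and terminated in `next_state`."""
    global avg_reward
    # TD error: each per-step reward is measured relative to the
    # reward-rate estimate, one subtraction per elapsed step.
    delta = (sum(r - avg_reward for r in rewards)
             + max(q[(next_state, o)] for o in options)
             - q[(state, option)])
    q[(state, option)] += alpha * delta
    # The reward-rate estimate moves with the same TD error,
    # scaled by eta, as in Differential Q-learning.
    avg_reward += eta * alpha * delta
```

For example, if an option `o` runs from state `s` for three steps, yields rewards `[0, 0, 1]`, and terminates in `s2`, one would call `update(s, o, [0, 0, 1], s2, options)`.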

Cite

Text

Wan et al. "Average-Reward Learning and Planning with Options." Neural Information Processing Systems, 2021.

Markdown

[Wan et al. "Average-Reward Learning and Planning with Options." Neural Information Processing Systems, 2021.](https://mlanthology.org/neurips/2021/wan2021neurips-averagereward/)

BibTeX

@inproceedings{wan2021neurips-averagereward,
  title     = {{Average-Reward Learning and Planning with Options}},
  author    = {Wan, Yi and Naik, Abhishek and Sutton, Richard S.},
  booktitle = {Neural Information Processing Systems},
  year      = {2021},
  url       = {https://mlanthology.org/neurips/2021/wan2021neurips-averagereward/}
}