General Discounting Versus Average Reward

Abstract

Consider an agent interacting with an environment in cycles. In every interaction cycle the agent is rewarded for its performance. We compare the average reward U from cycle 1 to m (average value) with the future discounted reward V from cycle k to ∞ (discounted value). We consider essentially arbitrary (non-geometric) discount sequences and arbitrary reward sequences (non-MDP environments). We show that asymptotically U for m→∞ and V for k→∞ are equal, provided both limits exist. Further, if the effective horizon grows linearly with k or faster, then the existence of the limit of U implies that the limit of V exists. Conversely, if the effective horizon grows linearly with k or slower, then existence of the limit of V implies that the limit of U exists.
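For concreteness, here is a hedged sketch of the quantities compared in the abstract. The notation below (U_{1m}, V_{kγ}, Γ_k, and the half-life style effective horizon h_k) is an assumption chosen to be consistent with the abstract; the paper's exact definitions and symbols may differ.

% Average value over cycles 1..m for reward sequence r_1, r_2, ...
U_{1m} := \frac{1}{m} \sum_{i=1}^{m} r_i

% Discounted value from cycle k, for a summable discount sequence \gamma_i \ge 0
V_{k\gamma} := \frac{1}{\Gamma_k} \sum_{i=k}^{\infty} \gamma_i r_i,
\qquad \Gamma_k := \sum_{i=k}^{\infty} \gamma_i

% One common notion of effective horizon at cycle k (assumed here for illustration):
% the number of future cycles needed to exhaust half of the remaining discount mass
h_k := \min\{ h \ge 0 : \Gamma_{k+h} \le \tfrac{1}{2}\,\Gamma_k \}

Under these definitions, the stated results relate \lim_{m\to\infty} U_{1m} to \lim_{k\to\infty} V_{k\gamma}, with the direction of implication depending on whether h_k grows at least or at most linearly in k.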

Cite

Text

Hutter. "General Discounting Versus Average Reward." International Conference on Algorithmic Learning Theory, 2006. doi:10.1007/11894841_21

Markdown

[Hutter. "General Discounting Versus Average Reward." International Conference on Algorithmic Learning Theory, 2006.](https://mlanthology.org/alt/2006/hutter2006alt-general/) doi:10.1007/11894841_21

BibTeX

@inproceedings{hutter2006alt-general,
  title     = {{General Discounting Versus Average Reward}},
  author    = {Hutter, Marcus},
  booktitle = {International Conference on Algorithmic Learning Theory},
  year      = {2006},
  pages     = {244--258},
  doi       = {10.1007/11894841_21},
  url       = {https://mlanthology.org/alt/2006/hutter2006alt-general/}
}