Online Regret Bounds for Undiscounted Continuous Reinforcement Learning
Abstract
We derive sublinear regret bounds for undiscounted reinforcement learning in continuous state space. The proposed algorithm combines state aggregation with upper confidence bounds to implement optimism in the face of uncertainty. Besides the existence of an optimal policy that satisfies the Poisson equation, the only assumptions made are Hölder continuity of rewards and transition probabilities.
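The two ingredients named in the abstract can be illustrated with a minimal sketch: discretize a one-dimensional state space [0, 1] into intervals (state aggregation), and attach an upper confidence bound to the empirical reward of each aggregated state-action pair. This is an assumption-laden illustration, not the paper's algorithm: the interval count, action count, and Hoeffding-style bonus below are illustrative choices, and the paper's confidence terms additionally account for the aggregation error under Hölder continuity.

```python
# Minimal sketch of state aggregation + upper confidence bounds.
# NOT the paper's algorithm; n_intervals, n_actions, and the
# Hoeffding-style bonus are illustrative assumptions.
import numpy as np

class AggregatedUCB:
    def __init__(self, n_intervals=10, n_actions=2, delta=0.05):
        self.n = n_intervals
        self.delta = delta
        # Visit counts and accumulated rewards per (interval, action).
        self.counts = np.zeros((n_intervals, n_actions))
        self.reward_sums = np.zeros((n_intervals, n_actions))

    def aggregate(self, state):
        """Map a continuous state in [0, 1] to its interval index."""
        return min(int(state * self.n), self.n - 1)

    def update(self, state, action, reward):
        """Record one observed transition's reward."""
        s = self.aggregate(state)
        self.counts[s, action] += 1
        self.reward_sums[s, action] += reward

    def ucb_rewards(self, state):
        """Optimistic (upper-confidence) reward estimate per action."""
        s = self.aggregate(state)
        n = np.maximum(self.counts[s], 1)
        means = self.reward_sums[s] / n
        # Hoeffding-style width; the paper's bonus also covers the
        # discretization error guaranteed by Hoelder continuity.
        width = np.sqrt(np.log(1.0 / self.delta) / (2.0 * n))
        return means + width
```

Under Hölder continuity, nearby states have similar rewards and transition probabilities, so the aggregation error shrinks with the interval width while the estimation error shrinks with the visit counts; the regret analysis balances these two sources of error to obtain sublinear regret.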
Cite
Text
Ortner and Ryabko. "Online Regret Bounds for Undiscounted Continuous Reinforcement Learning." Neural Information Processing Systems, 2012.
Markdown
[Ortner and Ryabko. "Online Regret Bounds for Undiscounted Continuous Reinforcement Learning." Neural Information Processing Systems, 2012.](https://mlanthology.org/neurips/2012/ortner2012neurips-online/)
BibTeX
@inproceedings{ortner2012neurips-online,
title = {{Online Regret Bounds for Undiscounted Continuous Reinforcement Learning}},
author = {Ortner, Ronald and Ryabko, Daniil},
booktitle = {Neural Information Processing Systems},
year = {2012},
pages = {1763-1771},
url = {https://mlanthology.org/neurips/2012/ortner2012neurips-online/}
}