Selecting Near-Optimal Approximate State Representations in Reinforcement Learning
Abstract
We consider a reinforcement learning setting introduced in [5] where the learner does not have explicit access to the states of the underlying Markov decision process (MDP). Instead, she has access to several models that map histories of past interactions to states. Here we improve over known regret bounds in this setting and, more importantly, generalize to the case where the models given to the learner do not contain a true model that induces an MDP representation, but only approximations of it. We also give improved error bounds for state aggregation.
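As an illustration of the setting only (not taken from the paper, and with hypothetical names such as RepresentationModel), the following minimal Python sketch shows what a "model mapping histories to states" might look like: each candidate model is a function from the interaction history to a state index, and the learner is handed several such candidates, none of which is guaranteed to induce an exact MDP.

```python
# Illustrative sketch (not from the paper): a state-representation model
# maps the history of past (observation, action, reward) triples to a state.
# All names here are hypothetical, chosen only to illustrate the setting.

from typing import Callable, List, Tuple

HistoryEntry = Tuple[int, int, float]   # (observation, action, reward)
History = List[HistoryEntry]

# A representation model is any function mapping a history to a state index.
RepresentationModel = Callable[[History], int]

def last_observation_model(history: History) -> int:
    """Candidate model: the state is simply the most recent observation."""
    return history[-1][0] if history else 0

def window_model(history: History, k: int = 2) -> int:
    """Candidate model: the state encodes the last k observations."""
    window = tuple(obs for obs, _, _ in history[-k:])
    return hash(window)  # stand-in for an explicit state index

# The learner is given several such candidate models; in the generalized
# setting of the paper, none of them need induce an exact MDP over its states.
candidate_models = [last_observation_model, window_model]
```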
Cite
Text
Ortner et al. "Selecting Near-Optimal Approximate State Representations in Reinforcement Learning." International Conference on Algorithmic Learning Theory, 2014. doi:10.1007/978-3-319-11662-4_11
Markdown
[Ortner et al. "Selecting Near-Optimal Approximate State Representations in Reinforcement Learning." International Conference on Algorithmic Learning Theory, 2014.](https://mlanthology.org/alt/2014/ortner2014alt-selecting/) doi:10.1007/978-3-319-11662-4_11
BibTeX
@inproceedings{ortner2014alt-selecting,
title = {{Selecting Near-Optimal Approximate State Representations in Reinforcement Learning}},
author = {Ortner, Ronald and Maillard, Odalric-Ambrym and Ryabko, Daniil},
booktitle = {International Conference on Algorithmic Learning Theory},
year = {2014},
pages = {140-154},
doi = {10.1007/978-3-319-11662-4_11},
url = {https://mlanthology.org/alt/2014/ortner2014alt-selecting/}
}