Abstraction Selection in Model-Based Reinforcement Learning

Abstract

State abstractions are often used to reduce the complexity of model-based reinforcement learning when only limited quantities of data are available. However, choosing the appropriate level of abstraction is an important problem in practice. Existing approaches have theoretical guarantees only under strong assumptions on the domain or asymptotically large amounts of data. In this paper, we propose a simple algorithm based on statistical hypothesis testing that comes with a finite-sample guarantee under assumptions on the candidate abstractions. Our algorithm trades off the low approximation error of finer abstractions against the low estimation error of coarser abstractions, resulting in a loss bound that depends only on the quality of the best available abstraction and is polynomial in the planning horizon.
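
The abstract describes the algorithm only at a high level; the Python sketch below illustrates the kind of trade-off it refers to, assuming two nested candidate abstractions and a concentration-style confidence margin standing in for the paper's actual test statistic and bound. The function names, the refine_map argument, and the L1 comparison are illustrative assumptions, not the authors' method.

# Illustrative sketch only: prefer a coarser abstraction (low estimation
# error) unless the data give statistically significant evidence that it
# aggregates fine states with different dynamics (high approximation error).
# The margin below is a generic concentration-style threshold, not the
# paper's hypothesis test or loss bound.

import numpy as np
from collections import defaultdict

def estimate_fine_model(transitions, n_fine, n_actions, fine):
    """Count-based transition estimates P_hat[s, a, s'] under the fine abstraction."""
    counts = np.zeros((n_fine, n_actions, n_fine))
    for s, a, s_next in transitions:
        counts[fine(s), a, fine(s_next)] += 1.0
    visits = counts.sum(axis=2)
    probs = counts / np.maximum(visits[:, :, None], 1.0)
    return probs, visits

def select_abstraction(transitions, fine, n_fine, n_actions, refine_map, delta=0.05):
    """Return 'coarse' unless some coarse cell mixes fine states whose
    estimated next-state distributions differ by more than an
    estimation-error margin (a stand-in for a hypothesis test)."""
    probs, visits = estimate_fine_model(transitions, n_fine, n_actions, fine)
    scale = np.sqrt(2.0 * np.log(2.0 * n_fine * n_actions / delta))
    cells = defaultdict(list)
    for s in range(n_fine):
        cells[refine_map(s)].append(s)
    for members in cells.values():
        for a in range(n_actions):
            seen = [s for s in members if visits[s, a] > 0]
            for i, si in enumerate(seen):
                for sj in seen[i + 1:]:
                    gap = np.abs(probs[si, a] - probs[sj, a]).sum()  # L1 distance
                    margin = scale * (visits[si, a] ** -0.5 + visits[sj, a] ** -0.5)
                    if gap > margin:
                        return "fine"   # coarse abstraction rejected by the data
    return "coarse"

Here transitions is assumed to be a list of (ground state, action, next ground state) tuples, fine maps ground states to fine abstract indices, and refine_map maps each fine index to the coarse cell that aggregates it; preferring the coarser abstraction unless the data reject it mirrors the approximation-versus-estimation trade-off described above.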

Cite

Text

Jiang et al. "Abstraction Selection in Model-Based Reinforcement Learning." International Conference on Machine Learning, 2015.

Markdown

[Jiang et al. "Abstraction Selection in Model-Based Reinforcement Learning." International Conference on Machine Learning, 2015.](https://mlanthology.org/icml/2015/jiang2015icml-abstraction/)

BibTeX

@inproceedings{jiang2015icml-abstraction,
  title     = {{Abstraction Selection in Model-Based Reinforcement Learning}},
  author    = {Jiang, Nan and Kulesza, Alex and Singh, Satinder},
  booktitle = {International Conference on Machine Learning},
  year      = {2015},
  pages     = {179--188},
  volume    = {37},
  url       = {https://mlanthology.org/icml/2015/jiang2015icml-abstraction/}
}