Exploration in Metric State Spaces

Abstract

We present metric-E3, a provably near-optimal algorithm for reinforcement learning in Markov decision processes in which there is a natural metric on the state space that allows the construction of accurate local models. The algorithm is a generalization of the E3 algorithm of Kearns and Singh, and assumes a black box for approximate planning. Unlike the original E3, metric-E3 finds a near-optimal policy in an amount of time that does not directly depend on the size of the state space, but instead depends on the covering number of the state space. Informally, the covering number is the number of neighborhoods required for accurate local modeling.

Published in: Proceedings of the Twentieth International Conference on Machine Learning (ICML 2003).
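The covering number mentioned in the abstract can be estimated with a simple greedy construction: repeatedly pick any point not yet within distance eps of a chosen center and make it a new center. This is a minimal sketch, not the paper's algorithm; the point set, metric, and eps value below are illustrative assumptions.

```python
def greedy_cover(points, dist, eps):
    """Greedily build an eps-cover: every point ends up within
    eps of some center. The number of centers upper-bounds the
    covering number at radius eps (up to a factor of 2)."""
    centers = []
    for p in points:
        # p becomes a new center only if no existing center covers it
        if all(dist(p, c) > eps for c in centers):
            centers.append(p)
    return centers

# Toy example: a 1-D grid on [0, 1] with the absolute-difference metric
points = [i / 10 for i in range(11)]  # 0.0, 0.1, ..., 1.0
centers = greedy_cover(points, lambda a, b: abs(a - b), eps=0.25)
print(len(centers))  # number of eps-neighborhoods needed
```

The key property metric-E3 exploits is that this count, not the raw number of states, governs how much exploration is needed when nearby states admit accurate shared local models.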

Cite

Text

Kakade et al. "Exploration in Metric State Spaces." International Conference on Machine Learning, 2003.

Markdown

[Kakade et al. "Exploration in Metric State Spaces." International Conference on Machine Learning, 2003.](https://mlanthology.org/icml/2003/kakade2003icml-exploration/)

BibTeX

@inproceedings{kakade2003icml-exploration,
  title     = {{Exploration in Metric State Spaces}},
  author    = {Kakade, Sham M. and Kearns, Michael J. and Langford, John},
  booktitle = {International Conference on Machine Learning},
  year      = {2003},
  pages     = {306--312},
  url       = {https://mlanthology.org/icml/2003/kakade2003icml-exploration/}
}