Q-Learning with Nearest Neighbors

Abstract

We consider model-free reinforcement learning for infinite-horizon discounted Markov Decision Processes (MDPs) with a continuous state space and unknown transition kernel, when only a single sample path of the system under an arbitrary policy is available. We consider the Nearest Neighbor Q-Learning (NNQL) algorithm, which learns the optimal Q-function using a nearest neighbor regression method. As the main contribution, we provide a tight finite-sample analysis of the convergence rate. In particular, for MDPs with a $d$-dimensional state space and discount factor $\gamma \in (0,1)$, given an arbitrary sample path with "covering time" $L$, we establish that the algorithm is guaranteed to output an $\varepsilon$-accurate estimate of the optimal Q-function using $\tilde{O}(L/(\varepsilon^3(1-\gamma)^7))$ samples. For instance, for a well-behaved MDP, the covering time of the sample path under the purely random policy scales as $\tilde{O}(1/\varepsilon^d)$, so the sample complexity scales as $\tilde{O}(1/\varepsilon^{d+3})$. Indeed, we establish a lower bound showing that a dependence of $\tilde{\Omega}(1/\varepsilon^{d+2})$ is necessary.
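
To make the nearest-neighbor idea concrete, here is a minimal Python sketch (not from the paper) of Q-learning on a fixed covering net of a continuous state space, where each observed state is mapped to its nearest center. The covering centers, constant step size, and transition format are illustrative assumptions; the paper's exact NNQL update and step-size schedule differ.

```python
import numpy as np

def nearest_neighbor_q_learning(transitions, centers, num_actions,
                                gamma=0.9, step_size=0.1):
    """Sketch: tabular Q-learning on a nearest-neighbor discretization.

    transitions: iterable of (state, action, reward, next_state) from a
                 single sample path; states are d-dimensional arrays.
    centers:     (m, d) array of covering centers (assumed given).
    Note: this illustrates the nearest-neighbor idea only, not the exact
    NNQL update of Shah and Xie (2018).
    """
    m = centers.shape[0]
    Q = np.zeros((m, num_actions))

    def nn(state):
        # Index of the covering center nearest to a continuous state.
        return int(np.argmin(np.linalg.norm(centers - state, axis=1)))

    for state, action, reward, next_state in transitions:
        i, j = nn(state), nn(next_state)
        # Standard Q-learning target, evaluated at the nearest centers.
        target = reward + gamma * Q[j].max()
        Q[i, action] += step_size * (target - Q[i, action])

    return Q
```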

Cite

Text

Shah and Xie. "Q-Learning with Nearest Neighbors." Neural Information Processing Systems, 2018.

Markdown

[Shah and Xie. "Q-Learning with Nearest Neighbors." Neural Information Processing Systems, 2018.](https://mlanthology.org/neurips/2018/shah2018neurips-qlearning/)

BibTeX

@inproceedings{shah2018neurips-qlearning,
  title     = {{Q-Learning with Nearest Neighbors}},
  author    = {Shah, Devavrat and Xie, Qiaomin},
  booktitle = {Neural Information Processing Systems},
  year      = {2018},
  pages     = {3111-3121},
  url       = {https://mlanthology.org/neurips/2018/shah2018neurips-qlearning/}
}