Sample Complexity of Kernel-Based Q-Learning

Abstract

Modern reinforcement learning (RL) often faces an enormous state-action space. Existing analytical results typically cover settings with a small number of state-action pairs, or simple models such as linearly parameterized Q-functions. To derive statistically efficient RL policies that handle large state-action spaces with more general Q-functions, some recent works have considered nonlinear function approximation using kernel ridge regression. In this work, we derive sample complexities for kernel-based Q-learning when a generative model exists. We propose a non-parametric Q-learning algorithm which finds an $\varepsilon$-optimal policy in an arbitrarily large-scale discounted MDP. The sample complexity of the proposed algorithm is order-optimal with respect to $\varepsilon$ and the complexity of the kernel (in terms of its information gain). To the best of our knowledge, this is the first result showing a finite sample complexity under such a general model.
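The paper's algorithm is not reproduced on this page. As a rough illustration of the underlying idea only, the sketch below runs a fitted Q-iteration loop in which the Q-function is estimated by kernel ridge regression from generative-model samples. Every specific choice here (RBF kernel, the 1-D toy MDP, regularization and lengthscale values, the number of iterations) is an assumption made for illustration and is not the construction analyzed in the paper.

```python
import numpy as np

def rbf_kernel(X, Y, lengthscale=1.0):
    """Squared-exponential kernel matrix between rows of X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * lengthscale ** 2))

def krr_fit_eval(Z, targets, Z_query, reg=1.0, lengthscale=1.0):
    """Fit Q on state-action features Z against Bellman targets by
    kernel ridge regression, then evaluate at the query points."""
    K = rbf_kernel(Z, Z, lengthscale)
    alpha = np.linalg.solve(K + reg * np.eye(len(Z)), targets)
    return rbf_kernel(Z_query, Z, lengthscale) @ alpha

# Toy generative-model data: 1-D state, 2 actions, samples (s, a, r, s').
rng = np.random.default_rng(0)
n, gamma = 200, 0.9
S = rng.uniform(-1, 1, size=(n, 1))
A = rng.integers(0, 2, size=(n, 1)).astype(float)
S_next = np.clip(S + (2 * A - 1) * 0.1
                 + 0.05 * rng.standard_normal((n, 1)), -1, 1)
R = -np.abs(S_next).ravel()          # reward: stay near the origin
Z = np.hstack([S, A])                # joint state-action features

Q = np.zeros(n)                      # Q estimate at the sampled pairs
for _ in range(50):
    # Bellman targets r + gamma * max_a' Q(s', a'), with Q(s', a')
    # interpolated by the current kernel ridge regression fit.
    Q_next = np.stack([
        krr_fit_eval(Z, Q, np.hstack([S_next, np.full_like(S_next, a)]))
        for a in (0.0, 1.0)
    ], axis=1)
    targets = R + gamma * Q_next.max(axis=1)
    Q = krr_fit_eval(Z, targets, Z)  # refit Q to the new targets
```

Under these assumptions, the kernel's smoothness controls how well the regression generalizes across unseen state-action pairs, which is the role the information gain plays in the paper's sample-complexity bound.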

Cite

Text

Yeh et al. "Sample Complexity of Kernel-Based Q-Learning." Artificial Intelligence and Statistics, 2023.

Markdown

[Yeh et al. "Sample Complexity of Kernel-Based Q-Learning." Artificial Intelligence and Statistics, 2023.](https://mlanthology.org/aistats/2023/yeh2023aistats-sample/)

BibTeX

@inproceedings{yeh2023aistats-sample,
  title     = {{Sample Complexity of Kernel-Based Q-Learning}},
  author    = {Yeh, Sing-Yuan and Chang, Fu-Chieh and Yueh, Chang-Wei and Wu, Pei-Yuan and Bernacchia, Alberto and Vakili, Sattar},
  booktitle = {Artificial Intelligence and Statistics},
  year      = {2023},
  pages     = {453--469},
  volume    = {206},
  url       = {https://mlanthology.org/aistats/2023/yeh2023aistats-sample/}
}