Achieving Constant Regret in Linear Markov Decision Processes
Abstract
We study constant regret guarantees in reinforcement learning (RL). Our objective is to design an algorithm that incurs only finite regret over infinitely many episodes with high probability. We introduce an algorithm, Cert-LSVI-UCB, for misspecified linear Markov decision processes (MDPs), where both the transition kernel and the reward function can be approximated by linear functions up to misspecification level $\zeta$. At the core of Cert-LSVI-UCB is an innovative certified estimator, which facilitates a fine-grained concentration analysis for multi-phase value-targeted regression, enabling us to establish an instance-dependent regret bound that is constant w.r.t. the number of episodes. Specifically, we demonstrate that for a linear MDP characterized by a minimal suboptimality gap $\Delta$, Cert-LSVI-UCB has a cumulative regret of $\tilde{\mathcal{O}}(d^3H^5/\Delta)$ with high probability, provided that the misspecification level $\zeta$ is below $\tilde{\mathcal{O}}(\Delta / (\sqrt{d}H^2))$. Here $d$ is the dimension of the feature space and $H$ is the horizon. Remarkably, this regret bound is independent of the number of episodes $K$. To the best of our knowledge, Cert-LSVI-UCB is the first algorithm to achieve a constant, instance-dependent, high-probability regret bound in RL with linear function approximation without relying on prior distribution assumptions.
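The main guarantee stated in the abstract can be summarized in display form; this is only a restatement of the claim above, not a new result:

```latex
% Instance-dependent regret bound of Cert-LSVI-UCB (as stated in the abstract).
% Assumes a minimal suboptimality gap \Delta > 0 and a misspecification level
% \zeta \le \tilde{\mathcal{O}}\big(\Delta / (\sqrt{d}\,H^2)\big).
\mathrm{Regret}(K) \;=\; \tilde{\mathcal{O}}\!\left(\frac{d^3 H^5}{\Delta}\right)
\quad \text{with high probability,}
```

where the right-hand side does not depend on the number of episodes $K$, which is what makes the bound constant.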
Cite
Text
Zhang et al. "Achieving Constant Regret in Linear Markov Decision Processes." Neural Information Processing Systems, 2024. doi:10.52202/079017-4154
Markdown
[Zhang et al. "Achieving Constant Regret in Linear Markov Decision Processes." Neural Information Processing Systems, 2024.](https://mlanthology.org/neurips/2024/zhang2024neurips-achieving/) doi:10.52202/079017-4154
BibTeX
@inproceedings{zhang2024neurips-achieving,
title = {{Achieving Constant Regret in Linear Markov Decision Processes}},
author = {Zhang, Weitong and Fan, Zhiyuan and He, Jiafan and Gu, Quanquan},
booktitle = {Neural Information Processing Systems},
year = {2024},
doi = {10.52202/079017-4154},
url = {https://mlanthology.org/neurips/2024/zhang2024neurips-achieving/}
}