Nearly Minimax Optimal Regret for Learning Infinite-Horizon Average-Reward MDPs with Linear Function Approximation
Abstract
We study reinforcement learning in an infinite-horizon average-reward setting with linear function approximation for linear mixture Markov decision processes (MDPs), where the transition probability function of the underlying MDP admits a linear form over a feature mapping of the current state, action, and next state. We propose a new algorithm, UCRL2-VTR, which can be seen as an extension of the UCRL2 algorithm with linear function approximation. We show that UCRL2-VTR with a Bernstein-type bonus can achieve a regret of $\tilde{O}(d\sqrt{DT})$, where $d$ is the dimension of the feature mapping, $T$ is the horizon, and $D$ is the diameter of the MDP. We also prove a matching lower bound $\tilde{\Omega}(d\sqrt{DT})$, which suggests that the proposed UCRL2-VTR is minimax optimal up to logarithmic factors. To the best of our knowledge, our algorithm is the first nearly minimax optimal RL algorithm with function approximation in the infinite-horizon average-reward setting.
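For concreteness, the following is a brief sketch of the standard quantities behind these claims; the notation ($\phi$, $\theta^*$, $J^*$, $r$) follows the usual conventions for linear mixture MDPs and average-reward regret, and is an assumption rather than a verbatim excerpt from the paper.

% Linear mixture MDP: the transition kernel is linear in a known
% feature mapping of (current state, action, next state), with an
% unknown parameter vector theta^* of dimension d.
\[ \mathbb{P}(s' \mid s, a) = \langle \phi(s, a, s'), \theta^{*} \rangle, \qquad \theta^{*} \in \mathbb{R}^{d}. \]
% Regret over T steps, measured against the optimal average reward J^*;
% D denotes the diameter of the MDP appearing in the bounds above.
\[ \mathrm{Regret}(T) = \sum_{t=1}^{T} \bigl( J^{*} - r(s_t, a_t) \bigr). \]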
Cite
Text
Wu et al. "Nearly Minimax Optimal Regret for Learning Infinite-Horizon Average-Reward MDPs with Linear Function Approximation." Artificial Intelligence and Statistics, 2022.
Markdown
[Wu et al. "Nearly Minimax Optimal Regret for Learning Infinite-Horizon Average-Reward MDPs with Linear Function Approximation." Artificial Intelligence and Statistics, 2022.](https://mlanthology.org/aistats/2022/wu2022aistats-nearly/)
BibTeX
@inproceedings{wu2022aistats-nearly,
title = {{Nearly Minimax Optimal Regret for Learning Infinite-Horizon Average-Reward MDPs with Linear Function Approximation}},
author = {Wu, Yue and Zhou, Dongruo and Gu, Quanquan},
booktitle = {Artificial Intelligence and Statistics},
year = {2022},
pages = {3883--3913},
volume = {151},
url = {https://mlanthology.org/aistats/2022/wu2022aistats-nearly/}
}