Finite-Time Error Bounds for Greedy-GQ
Abstract
Greedy-GQ with linear function approximation, originally proposed by Maei et al. (in: Proceedings of the International Conference on Machine Learning (ICML), 2010), is a value-based off-policy algorithm for optimal control in reinforcement learning, with a non-linear two-timescale structure and a non-convex objective function. This paper develops its tightest finite-time error bounds. We show that the Greedy-GQ algorithm converges as fast as $\mathcal{O}(1/\sqrt{T})$ under the i.i.d. setting and $\mathcal{O}(\log T/\sqrt{T})$ under the Markovian setting. We further design a variant of the vanilla Greedy-GQ algorithm using the nested-loop approach, and show that its sample complexity is $\mathcal{O}(\log(1/\epsilon)\,\epsilon^{-2})$, which matches that of the vanilla Greedy-GQ. Our finite-time error bounds match those of the stochastic gradient descent algorithm for general smooth non-convex optimization problems, despite the additional challenge of the two-timescale updates. Our finite-sample analysis provides theoretical guidance on choosing step-sizes for faster convergence in practice, and suggests a trade-off between the convergence rate and the quality of the obtained policy.
Our techniques provide a general approach for finite-sample analysis of non-convex two timescale value-based reinforcement learning algorithms.
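To make the two-timescale structure concrete, below is a minimal sketch of the vanilla Greedy-GQ update with linear function approximation, in the spirit of Maei et al. (2010): a slow update for the value parameter theta and a fast tracking update for the auxiliary parameter omega. The toy random-feature environment, the dimensions, and the step-sizes here are illustrative assumptions, not the paper's setting.

```python
import numpy as np

# Sketch of vanilla Greedy-GQ with linear function approximation.
# Everything about the environment below (random features, random rewards,
# 10 states, 4 actions) is a toy assumption for illustration only.

d, n_states, n_actions, gamma = 8, 10, 4, 0.9

def phi(state, action):
    """Toy feature map: a fixed pseudo-random feature vector per (s, a)."""
    sa_rng = np.random.default_rng(hash((state, action)) % (2**32))
    return sa_rng.standard_normal(d)

def greedy_gq(T=200, alpha=0.01, beta=0.05, seed=0):
    """Run T two-timescale updates; beta (fast, omega) > alpha (slow, theta)."""
    rng = np.random.default_rng(seed)
    theta = np.zeros(d)   # value-function parameter (slow timescale)
    omega = np.zeros(d)   # auxiliary correction parameter (fast timescale)
    for _ in range(T):
        # Off-policy sample from an arbitrary behavior distribution.
        s = int(rng.integers(n_states))
        a = int(rng.integers(n_actions))
        r = float(rng.standard_normal())
        s_next = int(rng.integers(n_states))
        # Greedy next action under the current theta (the "Greedy" in Greedy-GQ).
        a_star = max(range(n_actions), key=lambda b: theta @ phi(s_next, b))
        f, f_next = phi(s, a), phi(s_next, a_star)
        delta = r + gamma * (theta @ f_next) - theta @ f          # TD error
        theta = theta + alpha * (delta * f - gamma * (omega @ f) * f_next)
        omega = omega + beta * (delta - omega @ f) * f
    return theta, omega

theta, omega = greedy_gq()
```

The nested-loop variant analyzed in the paper replaces the single loop above with an outer loop over slow (theta) updates and an inner loop that runs the fast (omega) update to near-convergence before each slow step; the single-loop sketch is only meant to show the two coupled update rules.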
Cite

Wang et al. "Finite-Time Error Bounds for Greedy-GQ." Machine Learning, 2024. doi:10.1007/S10994-024-06542-X. https://mlanthology.org/mlj/2024/wang2024mlj-finitetime/

BibTeX
@article{wang2024mlj-finitetime,
title = {{Finite-Time Error Bounds for Greedy-GQ}},
author = {Wang, Yue and Zhou, Yi and Zou, Shaofeng},
journal = {Machine Learning},
year = {2024},
pages = {5981-6018},
doi = {10.1007/S10994-024-06542-X},
volume = {113},
url = {https://mlanthology.org/mlj/2024/wang2024mlj-finitetime/}
}