On the Stochastic (Variance-Reduced) Proximal Gradient Method for Regularized Expected Reward Optimization

Abstract

We consider a regularized expected reward optimization problem in the non-oblivious setting that covers many existing problems in reinforcement learning (RL). To solve this problem, we apply and analyze the classical stochastic proximal gradient method. In particular, the method is shown to admit an $O(\epsilon^{-4})$ sample complexity for finding an $\epsilon$-stationary point under standard conditions. Since the variance of the classical stochastic gradient estimator is typically large, which slows down convergence, we also apply an efficient stochastic variance-reduced proximal gradient method with an importance-sampling-based ProbAbilistic Gradient Estimator (PAGE). Our analysis shows that, under additional conditions, the sample complexity can be improved from $O(\epsilon^{-4})$ to $O(\epsilon^{-3})$. Our results on the stochastic (variance-reduced) proximal gradient method match the sample complexity of their most competitive counterparts for discounted Markov decision processes under similar settings. To the best of our knowledge, the proposed methods represent a novel approach to the general regularized reward optimization problem.
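To make the two update rules concrete, the sketch below illustrates a PAGE-style stochastic proximal gradient loop on a toy least-squares objective with an $\ell_1$ regularizer. This is a minimal illustration under simplifying assumptions, not the paper's algorithm: the gradient oracles (`sample_grad`, `sample_grad_diff`) and all step-size, batch-size, and probability parameters are hypothetical placeholders, and the importance-sampling correction the paper uses for the non-oblivious RL setting is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 20
theta_star = rng.normal(size=d)  # toy ground truth, for illustration only

def sample_grad(theta, n):
    """Stochastic gradient of the toy objective
    f(theta) = E_a[(a^T theta - a^T theta_star)^2] from n fresh samples."""
    A = rng.normal(size=(n, d))
    return 2.0 * A.T @ (A @ (theta - theta_star)) / n

def sample_grad_diff(theta_new, theta_old, n):
    """Averaged per-sample gradient difference on a *shared* minibatch,
    as the PAGE recursion requires."""
    A = rng.normal(size=(n, d))
    return 2.0 * A.T @ (A @ (theta_new - theta_old)) / n

def prox_l1(x, t):
    # Proximal operator of t * ||.||_1 (soft-thresholding).
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

# PAGE-style stochastic proximal gradient loop (placeholder parameters).
eta, lam, p = 0.05, 1e-3, 0.1
big_n, small_n, iters = 512, 32, 500
theta = np.zeros(d)
g = sample_grad(theta, big_n)  # initialize the estimator with a large batch
for _ in range(iters):
    theta_new = prox_l1(theta - eta * g, eta * lam)  # proximal gradient step
    if rng.random() < p:
        # With probability p: recompute the estimator from a large batch.
        g = sample_grad(theta_new, big_n)
    else:
        # With probability 1 - p: cheap recursive update reusing the old
        # estimator plus a small-batch gradient difference.
        g = g + sample_grad_diff(theta_new, theta, small_n)
    theta = theta_new
```

Setting the switching probability `p = 1` recovers the plain stochastic proximal gradient method; a smaller `p` is what lowers the expected per-iteration sampling cost and, under the additional conditions stated in the abstract, the overall sample complexity from $O(\epsilon^{-4})$ to $O(\epsilon^{-3})$.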

Cite

Text

Liang and Yang. "On the Stochastic (Variance-Reduced) Proximal Gradient Method for Regularized Expected Reward Optimization." Transactions on Machine Learning Research, 2024.

Markdown

[Liang and Yang. "On the Stochastic (Variance-Reduced) Proximal Gradient Method for Regularized Expected Reward Optimization." Transactions on Machine Learning Research, 2024.](https://mlanthology.org/tmlr/2024/liang2024tmlr-stochastic/)

BibTeX

@article{liang2024tmlr-stochastic,
  title     = {{On the Stochastic (Variance-Reduced) Proximal Gradient Method for Regularized Expected Reward Optimization}},
  author    = {Liang, Ling and Yang, Haizhao},
  journal   = {Transactions on Machine Learning Research},
  year      = {2024},
  url       = {https://mlanthology.org/tmlr/2024/liang2024tmlr-stochastic/}
}