Rate-Optimal Policy Optimization for Linear Markov Decision Processes

Abstract

We study regret minimization in online episodic linear Markov Decision Processes and propose a policy optimization algorithm that is computationally efficient and obtains rate-optimal $\widetilde O(\sqrt K)$ regret, where $K$ denotes the number of episodes. Our work is the first to establish the optimal rate of convergence (in terms of $K$) in the stochastic setting with bandit feedback using a policy-optimization-based approach, and the first to establish the optimal rate in the adversarial setting with full-information feedback, for which no algorithm with an optimal rate guarantee was previously known.

Cite

Text

Sherman et al. "Rate-Optimal Policy Optimization for Linear Markov Decision Processes." International Conference on Machine Learning, 2024.

Markdown

[Sherman et al. "Rate-Optimal Policy Optimization for Linear Markov Decision Processes." International Conference on Machine Learning, 2024.](https://mlanthology.org/icml/2024/sherman2024icml-rateoptimal/)

BibTeX

@inproceedings{sherman2024icml-rateoptimal,
  title     = {{Rate-Optimal Policy Optimization for Linear Markov Decision Processes}},
  author    = {Sherman, Uri and Cohen, Alon and Koren, Tomer and Mansour, Yishay},
  booktitle = {International Conference on Machine Learning},
  year      = {2024},
  pages     = {44815--44837},
  volume    = {235},
  url       = {https://mlanthology.org/icml/2024/sherman2024icml-rateoptimal/}
}