Warm-up Free Policy Optimization: Improved Regret in Linear Markov Decision Processes

Abstract

Policy Optimization (PO) methods are among the most popular Reinforcement Learning (RL) algorithms in practice. Recently, Sherman et al. [2023a] proposed a PO-based algorithm with rate-optimal regret guarantees under the linear Markov Decision Process (MDP) model. However, their algorithm relies on a costly pure exploration warm-up phase that is hard to implement in practice. This paper eliminates the undesired warm-up phase, replacing it with a simple and efficient contraction mechanism. Our PO algorithm achieves rate-optimal regret with improved dependence on the remaining problem parameters (horizon and function approximation dimension) in two fundamental settings: adversarial losses with full-information feedback and stochastic losses with bandit feedback.
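
For context, a minimal sketch of the standard episodic linear MDP setting and regret notion the abstract refers to (the notation below is illustrative and not taken from the paper): the learner interacts for $K$ episodes of horizon $H$, and a known feature map $\phi : \mathcal{S} \times \mathcal{A} \to \mathbb{R}^d$ linearly parameterizes the transitions and losses,
$$
  P_h(s' \mid s, a) = \phi(s, a)^\top \mu_h(s'), \qquad
  \ell_h^k(s, a) = \phi(s, a)^\top \theta_h^k .
$$
Regret compares the cumulative expected loss of the played policies $\pi_1, \dots, \pi_K$ to that of the best fixed policy in hindsight,
$$
  \mathrm{Regret}(K)
  = \sum_{k=1}^{K} V_1^{\pi_k}(s_1; \ell^k)
  - \min_{\pi} \sum_{k=1}^{K} V_1^{\pi}(s_1; \ell^k),
$$
where "rate-optimal" refers to the $\widetilde{O}(\sqrt{K})$ dependence on the number of episodes.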

Cite

Text

Cassel and Rosenberg. "Warm-up Free Policy Optimization: Improved Regret in Linear Markov Decision Processes." Neural Information Processing Systems, 2024. doi:10.52202/079017-0108

Markdown

[Cassel and Rosenberg. "Warm-up Free Policy Optimization: Improved Regret in Linear Markov Decision Processes." Neural Information Processing Systems, 2024.](https://mlanthology.org/neurips/2024/cassel2024neurips-warmup/) doi:10.52202/079017-0108

BibTeX

@inproceedings{cassel2024neurips-warmup,
  title     = {{Warm-up Free Policy Optimization: Improved Regret in Linear Markov Decision Processes}},
  author    = {Cassel, Asaf and Rosenberg, Aviv},
  booktitle = {Neural Information Processing Systems},
  year      = {2024},
  doi       = {10.52202/079017-0108},
  url       = {https://mlanthology.org/neurips/2024/cassel2024neurips-warmup/}
}