Narrowing the Gap Between Adversarial and Stochastic MDPs via Policy Optimization

Abstract

We consider the problem of learning in adversarial Markov decision processes (MDPs) with an oblivious adversary in a full-information setting. The agent interacts with an environment over $T$ episodes, each of which consists of $H$ stages, and each episode is evaluated with respect to a reward function that is revealed only at the end of the episode. We propose an algorithm, called APO-MVP, that achieves a regret bound of order $\tilde{\mathcal{O}}(\mathrm{poly}(H)\sqrt{SAT})$, where $S$ and $A$ are the sizes of the state and action spaces, respectively. This result improves upon the best-known regret bound by a factor of $\sqrt{S}$, bridging the gap between adversarial and stochastic MDPs, and matching the minimax lower bound $\Omega(\sqrt{H^3SAT})$ as far as the dependencies on $S$, $A$, and $T$ are concerned. The proposed algorithm and its analysis completely avoid the typical tool of occupancy measures; instead, the algorithm performs policy optimization based only on dynamic programming and on a black-box online linear optimization strategy run over estimated advantage functions, making it easy to implement. The analysis leverages two recent techniques: policy optimization based on online linear optimization strategies (Jonckheere et al., 2023) and a refined martingale analysis of the impact on values of estimating transition kernels (Zhang et al., 2023).
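To make the algorithmic idea concrete, below is a minimal, hypothetical sketch of the two ingredients the abstract describes: a backward dynamic-programming pass under an estimated transition kernel, and a per-state exponential-weights step (one possible instance of the black-box online linear optimization strategy) run over estimated advantages. All names and shapes (`P_hat`, `eta`, etc.) are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def backward_dp(P_hat, r, policy):
    """Dynamic programming under an estimated transition kernel.

    P_hat:  (H, S, A, S) estimated transition probabilities.
    r:      (H, S, A) rewards, revealed at the end of the episode.
    policy: (H, S, A) current randomized policy.
    Returns Q of shape (H, S, A) and V of shape (H+1, S), with V[H] = 0.
    """
    H, S, A, _ = P_hat.shape
    Q = np.zeros((H, S, A))
    V = np.zeros((H + 1, S))
    for h in reversed(range(H)):
        Q[h] = r[h] + P_hat[h] @ V[h + 1]       # Bellman backup
        V[h] = (policy[h] * Q[h]).sum(axis=-1)  # evaluate current policy
    return Q, V

def hedge_update(logits, advantage, eta):
    """One exponential-weights step at a single (stage, state) pair.

    logits:    (A,) cumulative scaled advantages accumulated so far.
    advantage: (A,) estimated advantage A_h(s, a) = Q_h(s, a) - V_h(s).
    eta:       learning rate of the online linear optimization strategy.
    Returns the updated logits and the new randomized policy over actions.
    """
    logits = logits + eta * advantage
    w = np.exp(logits - logits.max())  # stabilized softmax
    return logits, w / w.sum()
```

In this sketch, each episode would estimate `P_hat` from empirical transition counts, run `backward_dp` on the revealed rewards, and feed the advantages `Q[h, s] - V[h, s]` into `hedge_update` at every stage-state pair; any no-regret online linear optimization strategy could replace the exponential-weights step.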

Cite

Text

Tiapkin et al. "Narrowing the Gap Between Adversarial and Stochastic MDPs via Policy Optimization." Proceedings of The 28th International Conference on Artificial Intelligence and Statistics, 2025.

Markdown

[Tiapkin et al. "Narrowing the Gap Between Adversarial and Stochastic MDPs via Policy Optimization." Proceedings of The 28th International Conference on Artificial Intelligence and Statistics, 2025.](https://mlanthology.org/aistats/2025/tiapkin2025aistats-narrowing/)

BibTeX

@inproceedings{tiapkin2025aistats-narrowing,
  title     = {{Narrowing the Gap Between Adversarial and Stochastic MDPs via Policy Optimization}},
  author    = {Tiapkin, Daniil and Chzhen, Evgenii and Stoltz, Gilles},
  booktitle = {Proceedings of The 28th International Conference on Artificial Intelligence and Statistics},
  year      = {2025},
  pages     = {3331--3339},
  volume    = {258},
  url       = {https://mlanthology.org/aistats/2025/tiapkin2025aistats-narrowing/}
}