A Sharp Analysis of Model-Based Reinforcement Learning with Self-Play

Abstract

Model-based algorithms, which explore the environment by building and utilizing an estimated model, are widely used in reinforcement learning practice and are theoretically shown to achieve optimal sample efficiency for single-agent reinforcement learning in Markov Decision Processes (MDPs). However, for multi-agent reinforcement learning in Markov games, the best known sample complexity for model-based algorithms is rather suboptimal and compares unfavorably against recent model-free approaches. In this paper, we present a sharp analysis of model-based self-play algorithms for multi-agent Markov games. We design an algorithm, \emph{Optimistic Nash Value Iteration} (Nash-VI), for two-player zero-sum Markov games that outputs an $\epsilon$-approximate Nash policy in $\tilde{\mathcal{O}}(H^3SAB/\epsilon^2)$ episodes of game playing, where $S$ is the number of states, $A,B$ are the numbers of actions for the two players respectively, and $H$ is the horizon length. This significantly improves over the best known model-based guarantee of $\tilde{\mathcal{O}}(H^4S^2AB/\epsilon^2)$, and is the first to match the information-theoretic lower bound $\Omega(H^3S(A+B)/\epsilon^2)$ up to a $\min\{A,B\}$ factor. In addition, our guarantee compares favorably against the best known model-free algorithm whenever $\min\{A,B\}=o(H^3)$, and Nash-VI outputs a single Markov policy, whereas existing sample-efficient model-free algorithms output a nested mixture of Markov policies that is in general non-Markov and rather inconvenient to store and execute. We further adapt our analysis to design a provably efficient task-agnostic algorithm for zero-sum Markov games, as well as the first provably sample-efficient algorithms for multi-player general-sum Markov games.
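To make the optimistic value-iteration structure concrete, below is a minimal Python sketch of the planning step in the tabular setting: backward induction on an estimated transition model with an exploration bonus, solving a zero-sum matrix game at every stage-state pair. This is a simplified illustration, not the paper's implementation; the inputs `P_hat`, `r`, and `bonus` and all function names are hypothetical conventions for this sketch, the matrix game is solved exactly by linear programming, and only the optimistic upper bound is tracked (the full Nash-VI additionally maintains a pessimistic lower bound used to certify the output policy).

```python
import numpy as np
from scipy.optimize import linprog

def solve_matrix_game(M):
    """Value and max-player strategy of the zero-sum matrix game M
    (row player maximizes) via the standard LP formulation."""
    A, B = M.shape
    # Decision variables: row strategy x (length A) and game value v.
    c = np.zeros(A + 1)
    c[-1] = -1.0                                   # maximize v == minimize -v
    # For every column j:  v - sum_i x_i * M[i, j] <= 0.
    A_ub = np.hstack([-M.T, np.ones((B, 1))])
    b_ub = np.zeros(B)
    A_eq = np.ones((1, A + 1))
    A_eq[0, -1] = 0.0                              # sum_i x_i = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * A + [(None, None)])
    return res.x[-1], res.x[:A]

def optimistic_nash_vi_step(P_hat, r, bonus, H):
    """Backward induction with optimism (upper bound only, for brevity).

    Hypothetical shape conventions for this sketch:
      P_hat: (H, S, A, B, S) estimated transition probabilities,
      r:     (H, S, A, B)    rewards in [0, 1],
      bonus: (H, S, A, B)    exploration bonuses.
    Returns optimistic values V_up and a Markov policy for the max player.
    """
    _, S, A, B = r.shape
    V_up = np.zeros((H + 1, S))
    pi = np.zeros((H, S, A))
    for h in range(H - 1, -1, -1):
        for s in range(S):
            # Optimistic Bellman backup, clipped to the trivial range [0, H].
            Q_up = np.minimum(r[h, s] + P_hat[h, s] @ V_up[h + 1] + bonus[h, s], H)
            V_up[h, s], pi[h, s] = solve_matrix_game(Q_up)
    return V_up, pi
```

Because each stage game is zero-sum, the LP returns an exact Nash equilibrium of the empirical matrix game; the single Markov policy mentioned in the abstract is exactly this per-(h, s) mixed strategy, which is cheap to store and execute compared with nested mixtures of policies.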

Cite

Text

Liu et al. "A Sharp Analysis of Model-Based Reinforcement Learning with Self-Play." International Conference on Machine Learning, 2021.

Markdown

[Liu et al. "A Sharp Analysis of Model-Based Reinforcement Learning with Self-Play." International Conference on Machine Learning, 2021.](https://mlanthology.org/icml/2021/liu2021icml-sharp/)

BibTeX

@inproceedings{liu2021icml-sharp,
  title     = {{A Sharp Analysis of Model-Based Reinforcement Learning with Self-Play}},
  author    = {Liu, Qinghua and Yu, Tiancheng and Bai, Yu and Jin, Chi},
  booktitle = {International Conference on Machine Learning},
  year      = {2021},
  pages     = {7001--7010},
  volume    = {139},
  url       = {https://mlanthology.org/icml/2021/liu2021icml-sharp/}
}