Online Reinforcement Learning in Stochastic Games
Abstract
We study online reinforcement learning in average-reward stochastic games (SGs). An SG models a two-player zero-sum game in a Markov environment, where state transitions and one-step payoffs are determined simultaneously by a learner and an adversary. We propose the \textsc{UCSG} algorithm, which achieves sublinear regret relative to the game value when competing with an arbitrary opponent. This result improves on previous ones in the same setting. The regret bound depends on the \textit{diameter}, an intrinsic quantity related to the mixing properties of SGs. With a slight extension, \textsc{UCSG} finds an $\varepsilon$-maximin stationary policy with a sample complexity of $\tilde{\mathcal{O}}\left(\text{poly}(1/\varepsilon)\right)$, where $\varepsilon$ is the error parameter. To the best of our knowledge, this extended result is the first of its kind in the average-reward setting. In the analysis, we develop perturbation bounds on Markov chains' mean first passage times and techniques for handling non-stationary opponents, which may be of independent interest.
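To make the notion of a game value concrete, the sketch below runs fictitious play on a zero-sum matrix game, the single-state special case of an SG, where the value is the maximin (equals the minimax) payoff over mixed strategies. This is only an illustration of the solution concept; it is not the paper's \textsc{UCSG} algorithm, and the function name and parameters are our own.

```python
# Illustrative sketch (not the paper's UCSG algorithm): fictitious play
# on a zero-sum matrix game, i.e. a stochastic game with a single state.
# The players' empirical action frequencies converge, and the row mix's
# worst-case payoff approaches the game value from below.

def fictitious_play(payoff, rounds=20000):
    """payoff[i][j]: reward to the row (maximizing) player.

    Returns a lower estimate of the game value: the worst-case payoff
    of the row player's empirical mixed strategy.
    """
    n, m = len(payoff), len(payoff[0])
    row_counts = [0] * n          # how often each row action was played
    col_counts = [0] * m
    row_counts[0] = col_counts[0] = 1   # arbitrary initial actions
    for _ in range(rounds):
        # Row player best-responds to the column player's empirical mix.
        row_payoffs = [sum(payoff[i][j] * col_counts[j] for j in range(m))
                       for i in range(n)]
        row_counts[row_payoffs.index(max(row_payoffs))] += 1
        # Column player best-responds (minimizing) to the row mix.
        col_payoffs = [sum(payoff[i][j] * row_counts[i] for i in range(n))
                       for j in range(m)]
        col_counts[col_payoffs.index(min(col_payoffs))] += 1
    total = sum(row_counts)
    row_mix = [c / total for c in row_counts]
    # Worst case over the opponent's pure actions.
    return min(sum(payoff[i][j] * row_mix[i] for i in range(n))
               for j in range(m))

# Matching pennies: the value is 0, attained by the uniform mixed strategy.
value = fictitious_play([[1, -1], [-1, 1]])
print(value)  # close to 0 (slightly below, being a worst-case estimate)
```

In the full SG setting, the paper's learner must additionally handle state transitions and an opponent that may play non-stationary policies, which is where the diameter-dependent regret analysis comes in.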
Cite
Text
Wei et al. "Online Reinforcement Learning in Stochastic Games." Neural Information Processing Systems, 2017.

Markdown

[Wei et al. "Online Reinforcement Learning in Stochastic Games." Neural Information Processing Systems, 2017.](https://mlanthology.org/neurips/2017/wei2017neurips-online/)

BibTeX
@inproceedings{wei2017neurips-online,
title = {{Online Reinforcement Learning in Stochastic Games}},
author = {Wei, Chen-Yu and Hong, Yi-Te and Lu, Chi-Jen},
booktitle = {Neural Information Processing Systems},
year = {2017},
pages = {4987--4997},
url = {https://mlanthology.org/neurips/2017/wei2017neurips-online/}
}