Robust Learning for Repeated Stochastic Games via Meta-Gaming
Abstract
In repeated stochastic games (RSGs), an agent must quickly adapt to the behavior of previously unknown associates, who may themselves be learning. This machine-learning problem is particularly challenging due, in part, to the presence of multiple (even infinitely many) equilibria and inherently large strategy spaces. In this paper, we introduce a method to reduce the strategy space of two-player general-sum RSGs to a handful of expert strategies. This process, called Mega, effectively reduces an RSG to a bandit problem. We show that the resulting strategy space preserves several important properties of the original RSG, thus enabling a learner to produce robust strategies within a reasonably small number of interactions. To better establish the strengths and weaknesses of this approach, we empirically evaluate the resulting learning system against other algorithms in three different RSGs.
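The core idea in the abstract — once the strategy space is reduced to a handful of expert strategies, learning becomes a bandit problem over those experts — can be illustrated with a minimal sketch. This is not the paper's algorithm; it is a generic UCB1 learner choosing among hypothetical expert strategies whose payoffs are assumed values for illustration only.

```python
import math
import random

def ucb1_select(counts, means, t):
    """Pick the expert with the highest UCB1 index; try each expert once first."""
    for i, n in enumerate(counts):
        if n == 0:
            return i
    return max(range(len(counts)),
               key=lambda i: means[i] + math.sqrt(2.0 * math.log(t) / counts[i]))

def run_bandit_over_experts(expert_payoffs, rounds=2000, seed=0):
    """Treat each expert strategy as a bandit arm yielding noisy payoffs."""
    rng = random.Random(seed)
    k = len(expert_payoffs)
    counts, means = [0] * k, [0.0] * k
    for t in range(1, rounds + 1):
        i = ucb1_select(counts, means, t)
        reward = expert_payoffs[i] + rng.gauss(0.0, 0.1)  # noisy payoff signal
        counts[i] += 1
        means[i] += (reward - means[i]) / counts[i]      # incremental mean
    return counts

# Hypothetical mean payoffs of three expert strategies (assumed, for illustration).
counts = run_bandit_over_experts([0.2, 0.9, 0.5])
```

Over repeated play, the learner concentrates its choices on the expert strategy with the highest average payoff, which is the sense in which the reduced game behaves like a bandit problem.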
Cite
Text
Crandall. "Robust Learning for Repeated Stochastic Games via Meta-Gaming." International Joint Conference on Artificial Intelligence, 2015.

Markdown

[Crandall. "Robust Learning for Repeated Stochastic Games via Meta-Gaming." International Joint Conference on Artificial Intelligence, 2015.](https://mlanthology.org/ijcai/2015/crandall2015ijcai-robust/)

BibTeX
@inproceedings{crandall2015ijcai-robust,
title = {{Robust Learning for Repeated Stochastic Games via Meta-Gaming}},
author = {Crandall, Jacob W.},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {2015},
pages = {3416--3422},
url = {https://mlanthology.org/ijcai/2015/crandall2015ijcai-robust/}
}