Temporal Induced Self-Play for Stochastic Bayesian Games
Abstract
One practical requirement in solving dynamic games is to ensure that the players play well from any decision point onward. To satisfy this requirement, existing efforts focus on equilibrium refinement, but the scalability and applicability of existing techniques are limited. In this paper, we propose Temporal-Induced Self-Play (TISP), a novel reinforcement learning-based framework to find strategies with decent performance from any decision point onward. TISP uses belief-space representation, backward induction, policy learning, and non-parametric approximation. Building upon TISP, we design a policy-gradient-based algorithm, TISP-PG. We prove that TISP-based algorithms can find approximate Perfect Bayesian Equilibria in zero-sum one-sided stochastic Bayesian games with a finite horizon. We test TISP-based algorithms in various games, including finitely repeated security games and a grid-world game. The results show that TISP-PG is more scalable than existing mathematical programming-based methods and significantly outperforms other learning-based methods.
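The abstract describes TISP's core loop: backward induction over the finite horizon, with a belief-conditioned policy trained by self-play at each time step using sampled beliefs and the already-trained policies of later steps. The sketch below illustrates only that control flow; the horizon, the belief sampler, and train_policy_at_step are hypothetical placeholders under assumed parameters, not the authors' implementation.

```python
# A minimal sketch of TISP-style backward induction with sampled beliefs,
# assuming a finite-horizon, one-sided Bayesian game with a small number of
# opponent types.  All names and constants here are illustrative placeholders.

import numpy as np

T = 5                    # finite horizon (assumed)
N_BELIEF_SAMPLES = 64    # beliefs sampled to cover the belief simplex (assumed)
N_TYPES = 2              # number of opponent types (assumed)

def sample_belief():
    """Draw a belief over opponent types uniformly from the probability simplex."""
    return np.random.dirichlet(np.ones(N_TYPES))

def train_policy_at_step(t, beliefs, future_policies):
    """Placeholder for belief-conditioned policy learning at step t.

    In the TISP-PG variant this would run self-play rollouts from (t, belief)
    that continue with the already-trained policies for steps t+1..T-1, then
    apply policy-gradient updates; here we only record the sampled beliefs."""
    return {"timestep": t, "anchor_beliefs": beliefs}

policies = [None] * T
for t in reversed(range(T)):          # backward induction: train the last step first
    beliefs = [sample_belief() for _ in range(N_BELIEF_SAMPLES)]
    policies[t] = train_policy_at_step(t, beliefs, policies[t + 1:])
```

Training back to front means each step's learning problem only needs the fixed, already-trained continuation policies, which is what lets the method target good play from any decision point onward.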
Cite
Text
Chen et al. "Temporal Induced Self-Play for Stochastic Bayesian Games." International Joint Conference on Artificial Intelligence, 2021. doi:10.24963/IJCAI.2021/14

Markdown
[Chen et al. "Temporal Induced Self-Play for Stochastic Bayesian Games." International Joint Conference on Artificial Intelligence, 2021.](https://mlanthology.org/ijcai/2021/chen2021ijcai-temporal/) doi:10.24963/IJCAI.2021/14

BibTeX
@inproceedings{chen2021ijcai-temporal,
title = {{Temporal Induced Self-Play for Stochastic Bayesian Games}},
author = {Chen, Weizhe and Zhou, Zihan and Wu, Yi and Fang, Fei},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {2021},
pages = {96-103},
doi = {10.24963/IJCAI.2021/14},
url = {https://mlanthology.org/ijcai/2021/chen2021ijcai-temporal/}
}