Learning Nash Equilibria in Zero-Sum Stochastic Games via Entropy-Regularized Policy Approximation
Abstract
We explore the use of policy approximations to reduce the computational cost of learning Nash equilibria in zero-sum stochastic games. We propose a new Q-learning-type algorithm that uses a sequence of entropy-regularized soft policies to approximate the Nash policy during the Q-function updates. We prove that, under certain conditions and with an appropriate update of the entropy regularization, the algorithm converges to a Nash equilibrium. We also demonstrate the proposed algorithm's ability to transfer previous training experiences, enabling the agents to adapt quickly to new environments. We provide a dynamic hyper-parameter scheduling scheme to further expedite convergence. Empirical results on a number of stochastic games verify that the proposed algorithm converges to the Nash equilibrium, while exhibiting a major speed-up over existing algorithms.
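The abstract does not spell out the update rule, so the following is only a rough sketch of how an entropy-regularized policy approximation might replace the exact Nash (matrix-game) value inside a tabular minimax-Q-style update. The function names (`soft_nash`, `soft_minimax_q_update`), the damped softmax best-response fixed point, and the temperature parameter `tau` are illustrative assumptions, not the authors' actual operator.

```python
import numpy as np

def softmax(x):
    z = x - x.max()
    e = np.exp(z)
    return e / e.sum()

def entropy(p):
    return -np.sum(p * np.log(p + 1e-12))

def soft_nash(Q_s, tau, n_iters=200):
    """Approximate the entropy-regularized equilibrium of the one-stage
    zero-sum matrix game with payoff Q_s (rows: maximizer, cols: minimizer),
    via damped softmax best-response iteration; `tau` is the temperature."""
    nA, nB = Q_s.shape
    pi, mu = np.ones(nA) / nA, np.ones(nB) / nB
    for _ in range(n_iters):
        pi_new = softmax((Q_s @ mu) / tau)      # maximizer soft best response
        mu_new = softmax(-(Q_s.T @ pi) / tau)   # minimizer soft best response
        pi, mu = 0.5 * (pi + pi_new), 0.5 * (mu + mu_new)
    # regularized game value from the maximizer's perspective
    value = pi @ Q_s @ mu + tau * entropy(pi) - tau * entropy(mu)
    return value, pi, mu

def soft_minimax_q_update(Q, s, a, b, r, s_next, alpha, gamma, tau):
    """One tabular update in which the exact Nash value of the next state
    is replaced by its entropy-regularized (soft) approximation."""
    v_next, _, _ = soft_nash(Q[s_next], tau)
    Q[s][a, b] += alpha * (r + gamma * v_next - Q[s][a, b])
```

In this sketch, annealing `tau` toward zero makes the soft policies approach the unregularized Nash policies, which is the intuition behind the abstract's statement that updating the entropy regularization yields convergence to a Nash equilibrium.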
Cite
Text
Guan et al. "Learning Nash Equilibria in Zero-Sum Stochastic Games via Entropy-Regularized Policy Approximation." International Joint Conference on Artificial Intelligence, 2021. doi:10.24963/IJCAI.2021/339Markdown
[Guan et al. "Learning Nash Equilibria in Zero-Sum Stochastic Games via Entropy-Regularized Policy Approximation." International Joint Conference on Artificial Intelligence, 2021.](https://mlanthology.org/ijcai/2021/guan2021ijcai-learning/) doi:10.24963/IJCAI.2021/339BibTeX
@inproceedings{guan2021ijcai-learning,
title = {{Learning Nash Equilibria in Zero-Sum Stochastic Games via Entropy-Regularized Policy Approximation}},
author = {Guan, Yue and Zhang, Qifan and Tsiotras, Panagiotis},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {2021},
pages = {2462--2468},
doi = {10.24963/IJCAI.2021/339},
url = {https://mlanthology.org/ijcai/2021/guan2021ijcai-learning/}
}