Online Double Oracle
Abstract
Solving strategic games with huge action spaces is a critical yet under-explored topic in economics, operations research, and artificial intelligence. This paper proposes new learning algorithms for solving two-player zero-sum normal-form games where the number of pure strategies is prohibitively large. Specifically, we combine no-regret analysis from online learning with Double Oracle (DO) from game theory. Our method, \emph{Online Double Oracle (ODO)}, provably converges to a Nash equilibrium (NE). Most importantly, unlike standard DO, ODO is \emph{rational} in the sense that each agent in ODO can exploit a strategic adversary with a regret bound of $\mathcal{O}(\sqrt{k \log(k)/T})$, where $T$ is the number of rounds and $k$ is not the total number of pure strategies but rather the size of the \emph{effective strategy set}. In many applications, we show empirically that $k$ scales only linearly with the support size of the NE. On tens of different real-world matrix games, ODO outperforms DO, PSRO, and no-regret algorithms such as Multiplicative Weights Update by a significant margin, both in convergence rate to an NE and in average payoff against strategic adversaries.
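To make the scheme concrete, below is a minimal, hypothetical Python sketch of the row player's side of such an algorithm on a payoff matrix: Multiplicative Weights Update (MWU) runs over the current effective strategy set, and the set is expanded with a best response to the opponent's empirical average strategy whenever that best response falls outside it. The function name odo_row_player, the step size eta, and the restart-on-expansion rule are illustrative assumptions, not the paper's exact procedure; the actual algorithm uses a tuned step size and epoch schedule to achieve the stated regret bound.

import numpy as np

def odo_row_player(A, opponent_mixes, eta=0.1):
    """Hypothetical sketch of ODO for the row player of a zero-sum game.

    A              : (n_rows, n_cols) payoff matrix; the row player receives x^T A y.
    opponent_mixes : sequence of column-player mixed strategies y_1, ..., y_T.
    """
    n_rows, n_cols = A.shape
    S = [0]                    # effective strategy set: row indices discovered so far
    w = np.ones(len(S))        # MWU weights over S
    y_avg = np.zeros(n_cols)   # opponent's empirical average strategy
    plays = []

    for t, y in enumerate(opponent_mixes, start=1):
        x = np.zeros(n_rows)
        x[S] = w / w.sum()     # mixed strategy supported on S
        plays.append(x)

        # MWU step on the restricted game: payoff of each strategy in S against y
        w *= np.exp(eta * (A[S] @ y))

        # Double-oracle expansion: best response to the opponent's running average
        y_avg += (y - y_avg) / t
        br = int(np.argmax(A @ y_avg))
        if br not in S:        # enlarge the effective set and start a new period
            S.append(br)
            w = np.ones(len(S))
    return plays

Restarting the weights on each expansion mirrors DO's iterative subgame growth, while MWU within a period supplies the no-regret guarantee over the current effective set; this is what keeps the regret bound dependent on $k$ rather than on the full number of pure strategies.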
Cite
Text
Dinh et al. "Online Double Oracle." Transactions on Machine Learning Research, 2022.
Markdown
[Dinh et al. "Online Double Oracle." Transactions on Machine Learning Research, 2022.](https://mlanthology.org/tmlr/2022/dinh2022tmlr-online/)
BibTeX
@article{dinh2022tmlr-online,
  title = {{Online Double Oracle}},
  author = {Dinh, Le Cong and McAleer, Stephen Marcus and Tian, Zheng and Perez-Nieves, Nicolas and Slumbers, Oliver and Mguni, David Henry and Wang, Jun and Ammar, Haitham Bou and Yang, Yaodong},
  journal = {Transactions on Machine Learning Research},
  year = {2022},
  url = {https://mlanthology.org/tmlr/2022/dinh2022tmlr-online/}
}