Approximate Exploitability: Learning a Best Response

Abstract

Researchers have shown that neural networks are vulnerable to adversarial examples and subtle environment changes. The resulting errors can look like blunders to humans, eroding trust in these agents. In prior games research, agent evaluation has often focused on in-practice game outcomes; such evaluation typically fails to assess robustness to worst-case outcomes. Computer poker research has examined how to assess such worst-case performance. Unfortunately, exact computation is infeasible in larger domains, and existing approximations are poker-specific. We introduce ISMCTS-BR, a scalable search-based deep reinforcement learning algorithm for learning a best response to an agent, approximating worst-case performance. We demonstrate the technique in several games against a variety of agents, including several AlphaZero-based agents. Supplementary material is available at https://arxiv.org/abs/2004.09677.
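
For context, the worst-case performance discussed in the abstract is exploitability: how much an exact best response gains against a fixed policy. In small games this can be computed exactly; the sketch below is not from the paper's code and assumes OpenSpiel's tabular exploitability utilities, evaluating a uniform-random policy in Kuhn poker.

import pyspiel
from open_spiel.python import policy
from open_spiel.python.algorithms import exploitability

# A small imperfect-information game where exact best responses are tractable.
game = pyspiel.load_game("kuhn_poker")

# The agent under evaluation: a uniform-random tabular policy stands in for a learned agent.
agent_policy = policy.TabularPolicy(game)

# Exploitability averages, over players, how much an exact best response gains
# against the fixed policy; it is zero only at a Nash equilibrium.
value = exploitability.exploitability(game, agent_policy)
print(f"Exploitability of the uniform policy in Kuhn poker: {value:.4f}")

ISMCTS-BR replaces the exact best-response computation above with a learned, search-based approximation, so the same quantity can be estimated in games where full game-tree traversal is infeasible.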

Cite

Text

Timbers et al. "Approximate Exploitability: Learning a Best Response." International Joint Conference on Artificial Intelligence, 2022. doi:10.24963/ijcai.2022/484

Markdown

[Timbers et al. "Approximate Exploitability: Learning a Best Response." International Joint Conference on Artificial Intelligence, 2022.](https://mlanthology.org/ijcai/2022/timbers2022ijcai-approximate/) doi:10.24963/ijcai.2022/484

BibTeX

@inproceedings{timbers2022ijcai-approximate,
  title     = {{Approximate Exploitability: Learning a Best Response}},
  author    = {Timbers, Finbarr and Bard, Nolan and Lockhart, Edward and Lanctot, Marc and Schmid, Martin and Burch, Neil and Schrittwieser, Julian and Hubert, Thomas and Bowling, Michael},
  booktitle = {International Joint Conference on Artificial Intelligence},
  year      = {2022},
  pages     = {3487--3493},
  doi       = {10.24963/ijcai.2022/484},
  url       = {https://mlanthology.org/ijcai/2022/timbers2022ijcai-approximate/}
}