Game Theory with Simulation of Other Players

Abstract

Game-theoretic interactions with AI agents could differ from traditional human-human interactions in various ways. One such difference is that it may be possible to simulate an AI agent (for example because its source code is known), which allows others to accurately predict the agent's actions. This could lower the bar for trust and cooperation. In this paper, we first formally define games in which one player can simulate another at a cost, and derive some basic properties of such games. Then, we prove a number of results for such games, including: (1) introducing simulation into generic-payoff normal-form games makes them easier to solve; (2) if the only obstacle to cooperation is a lack of trust in the possibly-simulated agent, simulation enables equilibria that improve the outcome for both agents; and (3) however, there are settings where introducing simulation results in strictly worse outcomes for both players.
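Result (2) can be illustrated with a one-shot trust game. The payoff numbers, the cost parameter `c`, and the function names below are illustrative assumptions, not taken from the paper; this is only a minimal sketch of why paying to simulate the other player can make both players better off when distrust is the sole obstacle to cooperation.

```python
# Hedged sketch: a one-shot trust game, with and without simulation.
# All payoffs and the simulation cost `c` are made-up illustrative values.
#
# Player 1 chooses Trust or NoTrust. After Trust, player 2 chooses
# Cooperate or Defect. Payoff tuples are (player 1, player 2).
NO_TRUST = (0, 0)
TRUST_COOPERATE = (2, 2)
TRUST_DEFECT = (-1, 3)

def outcome_without_simulation():
    # Player 2 prefers Defect after Trust (3 > 2), so a rational
    # player 1 anticipates defection and withholds trust.
    return NO_TRUST

def outcome_with_simulation(c):
    # Player 1 pays cost c to simulate player 2's strategy and trusts
    # only if the simulation shows cooperation. Player 2's best response
    # is then to cooperate (2 > 0), so cooperation is sustained whenever
    # the simulation is worth its cost to player 1.
    if c < TRUST_COOPERATE[0] - NO_TRUST[0]:
        p1, p2 = TRUST_COOPERATE
        return (p1 - c, p2)
    return NO_TRUST  # simulation too expensive; fall back to no trust

print(outcome_without_simulation())    # (0, 0)
print(outcome_with_simulation(c=0.5))  # (1.5, 2)
```

For any cost below the gain from cooperation, both players do strictly better than the no-simulation outcome of (0, 0), matching the paper's point that simulation can enable mutually beneficial equilibria; it also hints at result (3), since in other settings the option to simulate can instead make both players worse off.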

Cite

Text

Kovařík et al. "Game Theory with Simulation of Other Players." International Joint Conference on Artificial Intelligence, 2023. doi:10.24963/IJCAI.2023/312

Markdown

[Kovařík et al. "Game Theory with Simulation of Other Players." International Joint Conference on Artificial Intelligence, 2023.](https://mlanthology.org/ijcai/2023/kovarik2023ijcai-game/) doi:10.24963/IJCAI.2023/312

BibTeX

@inproceedings{kovarik2023ijcai-game,
  title     = {{Game Theory with Simulation of Other Players}},
  author    = {Kovařík, Vojtěch and Oesterheld, Caspar and Conitzer, Vincent},
  booktitle = {International Joint Conference on Artificial Intelligence},
  year      = {2023},
  pages     = {2800--2807},
  doi       = {10.24963/IJCAI.2023/312},
  url       = {https://mlanthology.org/ijcai/2023/kovarik2023ijcai-game/}
}