AI Testing Should Account for Sophisticated Strategic Behaviour

Abstract

This position paper argues for two claims regarding AI testing and evaluation. First, to remain informative about deployment behaviour, evaluations need to account for the possibility that AI systems understand their circumstances and reason strategically. Second, game-theoretic analysis can inform evaluation design by formalising and scrutinising the reasoning in evaluation-based safety cases. Drawing on examples from existing AI systems, a review of relevant research, and formal strategic analysis of a stylised evaluation scenario, we present evidence for these claims and motivate several research directions.

Cite

Text

Kovarik et al. "AI Testing Should Account for Sophisticated Strategic Behaviour." Advances in Neural Information Processing Systems, 2025.

Markdown

[Kovarik et al. "AI Testing Should Account for Sophisticated Strategic Behaviour." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/kovarik2025neurips-ai/)

BibTeX

@inproceedings{kovarik2025neurips-ai,
  title     = {{AI Testing Should Account for Sophisticated Strategic Behaviour}},
  author    = {Kovarik, Vojtech and Chen, Eric Olav and Petersen, Sami and Ghersengorin, Alexis and Conitzer, Vincent},
  booktitle = {Advances in Neural Information Processing Systems},
  year      = {2025},
  url       = {https://mlanthology.org/neurips/2025/kovarik2025neurips-ai/}
}