Baba Is AI: Break the Rules to Beat the Benchmark
Abstract
Humans solve problems by following existing rules and procedures, and also by leaps of creativity to redefine those rules and objectives. To probe these abilities, we developed a new benchmark based on the game Baba Is You where an agent manipulates both objects in the environment and rules, represented by movable tiles with words written on them, to reach a specified goal and win the game. We test three state-of-the-art multi-modal large language models (OpenAI GPT-4o, Google Gemini-1.5-Pro and Gemini-1.5-Flash) and find that they fail dramatically when generalization requires that the rules of the game must be manipulated and combined.
Cite

Text:
Cloos et al. "Baba Is AI: Break the Rules to Beat the Benchmark." ICML 2024 Workshops: LLMs_and_Cognition, 2024.

Markdown:
[Cloos et al. "Baba Is AI: Break the Rules to Beat the Benchmark." ICML 2024 Workshops: LLMs_and_Cognition, 2024.](https://mlanthology.org/icmlw/2024/cloos2024icmlw-baba/)

BibTeX:
@inproceedings{cloos2024icmlw-baba,
  title     = {{Baba Is AI: Break the Rules to Beat the Benchmark}},
  author    = {Cloos, Nathan and Jens, Meagan and Naim, Michelangelo and Kuo, Yen-Ling and Cases, Ignacio and Barbu, Andrei and Cueva, Christopher J},
  booktitle = {ICML 2024 Workshops: LLMs_and_Cognition},
  year      = {2024},
  url       = {https://mlanthology.org/icmlw/2024/cloos2024icmlw-baba/}
}