Revisiting the Othello World Model Hypothesis

Abstract

Li et al. (2023) used the Othello board game as a test case for GPT-2's ability to induce world models, and Nanda et al. (2023b) followed up on their work. We briefly discuss the original experiments and expand them to cover more language models and more comprehensive probing. Specifically, we analyze sequences of Othello board states and train each model to predict the next move from the previous moves. We evaluate seven language models (GPT-2, T5, Bart, Flan-T5, Mistral, LLaMA-2, and Qwen2.5) on the Othello task and conclude that these models not only learn to play Othello, but also induce the Othello board layout. We find that all models achieve up to 99% accuracy in unsupervised grounding and that the board features they learn are highly similar. This provides considerably stronger evidence for the Othello World Model Hypothesis than previous work.
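
To make the experimental setup concrete, here is a minimal sketch of the two ingredients the abstract describes: a language model trained with a next-token objective over Othello move sequences, and a linear probe that reads board state off the model's hidden representations. The tokenization (one token per playable square), the model sizes, and the probe shape are illustrative assumptions, not the paper's exact configuration.

```python
import torch
from transformers import GPT2Config, GPT2LMHeadModel

# Assumed tokenization: each of the 60 playable Othello squares is one token
# (the four centre squares are occupied at the start and are never played).
VOCAB_SIZE = 61  # 60 squares + 1 pad token
PAD_ID = 60

config = GPT2Config(
    vocab_size=VOCAB_SIZE,
    n_positions=60,                    # an Othello game has at most 60 moves
    n_embd=256, n_layer=8, n_head=8,   # illustrative sizes, not the paper's
)
model = GPT2LMHeadModel(config)

def next_move_loss(move_ids: torch.LongTensor) -> torch.Tensor:
    """Next-token objective: predict move t+1 from moves 1..t.
    HuggingFace shifts the labels internally, so labels == input_ids."""
    return model(input_ids=move_ids, labels=move_ids).loss

# World-model probe: a linear map from a hidden state to the contents of
# each of the 64 board squares (empty / own piece / opponent piece).
probe = torch.nn.Linear(config.n_embd, 64 * 3)

def probe_logits(move_ids: torch.LongTensor, layer: int = 6) -> torch.Tensor:
    """Per-position board-state predictions from one transformer layer."""
    with torch.no_grad():
        hidden = model(move_ids, output_hidden_states=True).hidden_states[layer]
    return probe(hidden).view(*move_ids.shape, 64, 3)
```

If the probe, trained only on hidden states, recovers the board with high accuracy, the model has induced a representation of the board layout; probing other architectures (T5, Bart, etc.) follows the same pattern with the corresponding encoder or decoder states.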

Cite

Text

Yuan and Søgaard. "Revisiting the Othello World Model Hypothesis." ICLR 2025 Workshops: World_Models, 2025.

Markdown

[Yuan and Søgaard. "Revisiting the Othello World Model Hypothesis." ICLR 2025 Workshops: World_Models, 2025.](https://mlanthology.org/iclrw/2025/yuan2025iclrw-revisiting/)

BibTeX

@inproceedings{yuan2025iclrw-revisiting,
  title     = {{Revisiting the Othello World Model Hypothesis}},
  author    = {Yuan, Yifei and Søgaard, Anders},
  booktitle = {ICLR 2025 Workshops: World_Models},
  year      = {2025},
  url       = {https://mlanthology.org/iclrw/2025/yuan2025iclrw-revisiting/}
}