Learning to Win by Reading Manuals in a Monte-Carlo Framework

Abstract

This paper presents a novel approach for leveraging automatically extracted textual knowledge to improve the performance of control applications such as games. Our ultimate goal is to enrich a stochastic player with high-level guidance expressed in text. Our model jointly learns to identify text that is relevant to a given game state in addition to learning game strategies guided by the selected text. Our method operates in the Monte-Carlo search framework, and learns both text analysis and game strategies based only on environment feedback. We apply our approach to the complex strategy game Civilization II using the official game manual as the text guide. Our results show that a linguistically-informed game-playing agent significantly outperforms its language-unaware counterpart, yielding a 27% absolute improvement and winning over 78% of games when playing against the built-in AI of Civilization II.
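The abstract summarizes the method at a high level: Monte-Carlo rollouts whose action values are informed by text selected from the manual, trained only from environment feedback. As a rough illustration of that idea (not the authors' implementation), the sketch below pairs softmax action selection over a linear, text-conditioned action-value function with updates driven by rollout outcomes. All names here (TextInformedMonteCarloPlayer, feature_fn, manual_sentences) are hypothetical placeholders.

    import math
    import random

    class TextInformedMonteCarloPlayer:
        """Illustrative sketch: Monte-Carlo player whose action values
        depend on features of (state, action, manual sentence)."""

        def __init__(self, manual_sentences, feature_fn, learning_rate=0.01):
            self.manual = manual_sentences    # sentences from the game manual
            self.features = feature_fn        # (state, action, sentence) -> {feature: value}
            self.weights = {}                 # linear action-value parameters
            self.lr = learning_rate

        def score(self, state, action, sentence):
            return sum(self.weights.get(f, 0.0) * v
                       for f, v in self.features(state, action, sentence).items())

        def value(self, state, action):
            # Approximate "identify relevant text" by taking the best-matching sentence.
            return max(self.score(state, action, s) for s in self.manual)

        def choose_action(self, state, actions, temperature=1.0):
            # Softmax over text-informed action values.
            scores = [self.value(state, a) / temperature for a in actions]
            m = max(scores)
            probs = [math.exp(s - m) for s in scores]
            z = sum(probs)
            return random.choices(actions, weights=[p / z for p in probs])[0]

        def update(self, state, action, outcome):
            # Environment feedback (the rollout outcome) is the only training signal:
            # move the predicted value toward the observed outcome.
            best = max(self.manual, key=lambda s: self.score(state, action, s))
            feats = self.features(state, action, best)
            error = outcome - self.score(state, action, best)
            for f, v in feats.items():
                self.weights[f] = self.weights.get(f, 0.0) + self.lr * error * v

In use, each simulated game from the current state would call choose_action at every step and then call update with the game's final outcome; the real system described in the paper uses a richer multi-layer model and full rollout machinery inside Civilization II.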

Cite

Text

Branavan et al. "Learning to Win by Reading Manuals in a Monte-Carlo Framework." Journal of Artificial Intelligence Research, 2012. doi:10.1613/JAIR.3484

Markdown

[Branavan et al. "Learning to Win by Reading Manuals in a Monte-Carlo Framework." Journal of Artificial Intelligence Research, 2012.](https://mlanthology.org/jair/2012/branavan2012jair-learning/) doi:10.1613/JAIR.3484

BibTeX

@article{branavan2012jair-learning,
  title     = {{Learning to Win by Reading Manuals in a Monte-Carlo Framework}},
  author    = {Branavan, S. R. K. and Silver, David and Barzilay, Regina},
  journal   = {Journal of Artificial Intelligence Research},
  year      = {2012},
  pages     = {661--704},
  doi       = {10.1613/JAIR.3484},
  volume    = {43},
  url       = {https://mlanthology.org/jair/2012/branavan2012jair-learning/}
}