Relational Reinforcement Learning in Infinite Mario

Abstract

Relational representations in reinforcement learning allow structural information, such as the presence of objects and the relationships between them, to be used in the description of value functions. In this paper, we show that such representations allow for the inclusion of background knowledge that qualitatively describes a state, and can be used to design agents that demonstrate learning behavior in domains with large state and action spaces, such as computer games.
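To make the idea concrete, below is a minimal sketch (not the authors' implementation) of how a relational, qualitative state description might feed a tabular value function in a Mario-like domain. The object names, relations, and feature scheme (left_of, right_of, is) are illustrative assumptions, as are the learner's parameters; the point is that raw frames differing only in exact positions can map to the same relational state, so learned values generalize across them.

    # Illustrative sketch only: relational state abstraction + tabular Q-learning.
    # Relations and object schema are assumptions, not the paper's actual design.
    from collections import defaultdict
    import random

    # A relational state is a set of ground predicates over objects,
    # e.g. ("left_of", "mario", "enemy1") or ("is", "enemy1", "goomba").
    RelationalState = frozenset  # of (relation, *args) tuples

    def abstract_state(raw_objects):
        """Turn raw object positions into qualitative relations (assumed scheme)."""
        facts = set()
        mario = raw_objects["mario"]
        for name, obj in raw_objects.items():
            if name == "mario":
                continue
            facts.add(("left_of" if obj["x"] > mario["x"] else "right_of",
                       "mario", name))
            facts.add(("is", name, obj["type"]))
        return RelationalState(facts)

    class RelationalQLearner:
        """Tabular Q-learning over relational (qualitative) state descriptions."""

        def __init__(self, actions, alpha=0.1, gamma=0.9, epsilon=0.1):
            self.q = defaultdict(float)  # (state, action) -> estimated value
            self.actions = actions
            self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

        def policy(self, state):
            # Epsilon-greedy action selection over the relational state.
            if random.random() < self.epsilon:
                return random.choice(self.actions)
            return max(self.actions, key=lambda a: self.q[(state, a)])

        def update(self, s, a, r, s_next):
            # Standard Q-learning backup, keyed on relational states.
            best_next = max(self.q[(s_next, a2)] for a2 in self.actions)
            self.q[(s, a)] += self.alpha * (r + self.gamma * best_next - self.q[(s, a)])

    # Usage: two frames with different pixel coordinates but the same relations
    # produce the same relational state, sharing one Q-value entry.
    frame = {"mario": {"x": 10}, "enemy1": {"x": 14, "type": "goomba"}}
    agent = RelationalQLearner(actions=["left", "right", "jump"])
    s = abstract_state(frame)
    a = agent.policy(s)
    agent.update(s, a, r=-1.0, s_next=s)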

Cite

Text

Mohan and Laird. "Relational Reinforcement Learning in Infinite Mario." AAAI Conference on Artificial Intelligence, 2010. doi:10.1609/AAAI.V24I1.7783

Markdown

[Mohan and Laird. "Relational Reinforcement Learning in Infinite Mario." AAAI Conference on Artificial Intelligence, 2010.](https://mlanthology.org/aaai/2010/mohan2010aaai-relational/) doi:10.1609/AAAI.V24I1.7783

BibTeX

@inproceedings{mohan2010aaai-relational,
  title     = {{Relational Reinforcement Learning in Infinite Mario}},
  author    = {Mohan, Shiwali and Laird, John E.},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2010},
  pages     = {1953--1954},
  doi       = {10.1609/AAAI.V24I1.7783},
  url       = {https://mlanthology.org/aaai/2010/mohan2010aaai-relational/}
}