Efficient Reinforcement Learning with Relocatable Action Models
Abstract
Realistic domains for learning possess regularities that make it possible to generalize experience across related states. This paper explores an environment-modeling framework that represents transitions as state-independent outcomes that are common to all states that share the same type. We analyze a set of novel learning problems that arise in this framework, providing lower and upper bounds. We single out one particular variant of practical interest and provide an efficient algorithm and experimental results in both simulated and robotic environments.
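To make the framework concrete, here is a minimal illustrative sketch (not the paper's actual algorithm) of a relocatable action model: outcomes are relative displacements learned per (state type, action) pair, so experience collected in one state generalizes to every other state of the same type. The class name, grid representation, and type labels are assumptions for illustration only.

```python
from collections import defaultdict

class RelocatableActionModel:
    """Illustrative sketch: outcomes (relative displacements) are
    learned per (state type, action) and shared by all states of
    that type, rather than per individual state."""

    def __init__(self):
        # counts[(state_type, action)][outcome] = number of observations
        self.counts = defaultdict(lambda: defaultdict(int))

    def observe(self, state_type, action, outcome):
        # An outcome is state-independent, e.g. a relative move (dx, dy).
        self.counts[(state_type, action)][outcome] += 1

    def predict(self, state, state_type, action):
        # Apply the type's shared outcome distribution to this state.
        c = self.counts[(state_type, action)]
        total = sum(c.values())
        return {(state[0] + dx, state[1] + dy): n / total
                for (dx, dy), n in c.items()}

# Experience gathered on any 'ice' cell generalizes to every ice cell.
m = RelocatableActionModel()
m.observe('ice', 'north', (0, 1))
m.observe('ice', 'north', (0, 1))
m.observe('ice', 'north', (1, 1))  # occasional sideways slip
print(m.predict((4, 4), 'ice', 'north'))
```

Because the outcome distribution is indexed only by type and action, a single set of observations covers all same-typed states, which is the source of the sample-efficiency the abstract refers to.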
Cite
Leffler et al. "Efficient Reinforcement Learning with Relocatable Action Models." AAAI Conference on Artificial Intelligence, 2007.
@inproceedings{leffler2007aaai-efficient,
title = {{Efficient Reinforcement Learning with Relocatable Action Models}},
author = {Leffler, Bethany R. and Littman, Michael L. and Edmunds, Timothy},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2007},
pages = {572--577},
url = {https://mlanthology.org/aaai/2007/leffler2007aaai-efficient/}
}