Potential-Based Shaping in Model-Based Reinforcement Learning
Abstract
Potential-based shaping was designed as a way of introducing background knowledge into model-free reinforcement-learning algorithms. By identifying states that are likely to have high value, this approach can decrease experience complexity, the number of trials needed to find near-optimal behavior. An orthogonal way of decreasing experience complexity is to use a model-based learning approach, building and exploiting an explicit transition model. In this paper, we show how potential-based shaping can be redefined to work in the model-based setting to produce an algorithm that shares the benefits of both ideas.
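For context, the model-free formulation that the paper builds on is the standard potential-based shaping of Ng, Harada, and Russell: the agent's reward is augmented with F(s, s') = γΦ(s') − Φ(s), which preserves the optimal policy for any potential function Φ. The sketch below illustrates this with a hypothetical one-dimensional potential; it is not the paper's model-based algorithm, only the baseline idea it redefines.

```python
GAMMA = 0.95  # discount factor

def phi(state):
    # Hypothetical potential encoding background knowledge: states closer
    # to a goal at position 10 are assumed likely to have high value.
    return -abs(10 - state)

def shaped_reward(reward, state, next_state, gamma=GAMMA):
    # Add the shaping term F(s, s') = gamma * phi(s') - phi(s) to the
    # environment reward; this transformation leaves the optimal policy
    # unchanged for any choice of phi.
    return reward + gamma * phi(next_state) - phi(state)

# A transition from state 3 toward the goal at 10 earns a positive bonus,
# steering exploration toward promising states without altering optimality.
bonus = shaped_reward(0.0, 3, 4)
```

Because the shaping term telescopes along any trajectory, the bonuses cancel out in the long run; they only redistribute reward to guide early learning.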
Cite
Text
Asmuth et al. "Potential-Based Shaping in Model-Based Reinforcement Learning." AAAI Conference on Artificial Intelligence, 2008.
Markdown
[Asmuth et al. "Potential-Based Shaping in Model-Based Reinforcement Learning." AAAI Conference on Artificial Intelligence, 2008.](https://mlanthology.org/aaai/2008/asmuth2008aaai-potential/)
BibTeX
@inproceedings{asmuth2008aaai-potential,
title = {{Potential-Based Shaping in Model-Based Reinforcement Learning}},
author = {Asmuth, John and Littman, Michael L. and Zinkov, Robert},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2008},
pages = {604--609},
url = {https://mlanthology.org/aaai/2008/asmuth2008aaai-potential/}
}