Transferring Instances for Model-Based Reinforcement Learning
Abstract
Reinforcement learning agents typically require a significant amount of data before performing well on complex tasks. Transfer learning methods have made progress in reducing sample complexity, but they have primarily been applied to model-free learning methods rather than to more data-efficient model-based learning methods. This paper introduces timbrel, a novel method capable of transferring information effectively into a model-based reinforcement learning algorithm. We demonstrate that timbrel can significantly improve the sample efficiency and asymptotic performance of a model-based algorithm when learning in a continuous state space. Additionally, we conduct experiments to test the limits of timbrel's effectiveness.
Cite
Text
Taylor et al. "Transferring Instances for Model-Based Reinforcement Learning." European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, 2008. doi:10.1007/978-3-540-87481-2_32
Markdown
[Taylor et al. "Transferring Instances for Model-Based Reinforcement Learning." European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, 2008.](https://mlanthology.org/ecmlpkdd/2008/taylor2008ecmlpkdd-transferring/) doi:10.1007/978-3-540-87481-2_32
BibTeX
@inproceedings{taylor2008ecmlpkdd-transferring,
title = {{Transferring Instances for Model-Based Reinforcement Learning}},
author = {Taylor, Matthew E. and Jong, Nicholas K. and Stone, Peter},
booktitle = {European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases},
year = {2008},
  pages = {488--505},
doi = {10.1007/978-3-540-87481-2_32},
url = {https://mlanthology.org/ecmlpkdd/2008/taylor2008ecmlpkdd-transferring/}
}