Value-Function-Based Transfer for Reinforcement Learning Using Structure Mapping
Abstract
Transfer learning concerns applying knowledge learned in one task (the source) to improve learning in another, related task (the target). In this paper, we use structure mapping, a psychological and computational theory of analogy making, to find mappings between the source and target tasks and thus construct the transfer functional automatically. Our structure mapping algorithm is a specialized and optimized version of the structure mapping engine and uses heuristic search to find the best maximal mapping. The algorithm takes as input the source and target task specifications represented as qualitative dynamic Bayes networks, which do not need probability information. We apply this method to the Keepaway task from RoboCup simulated soccer and compare the result from automated transfer to that from hand-coded transfer.
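The abstract's core idea is searching for the best structural correspondence between source and target task descriptions. A minimal sketch of that idea follows; the relation names, the scoring rule, and the exhaustive search are illustrative assumptions, not the paper's actual algorithm (which uses heuristic search over qualitative dynamic Bayes networks):

```python
from itertools import permutations

# Toy illustration of structure mapping between two tasks. Each task is
# described by a set of relations over state variables, loosely in the
# spirit of the paper's qualitative DBNs (structure only, no
# probabilities). All names below are hypothetical.

def mapping_score(mapping, source_rels, target_rels):
    """Count source relations whose image under the mapping is a target relation."""
    target_set = set(target_rels)
    return sum(
        1
        for name, a, b in source_rels
        if (name, mapping.get(a), mapping.get(b)) in target_set
    )

def best_mapping(source_vars, target_vars, source_rels, target_rels):
    """Exhaustive search over injective mappings (a stand-in for heuristic search)."""
    best, best_score = {}, -1
    for perm in permutations(target_vars, len(source_vars)):
        mapping = dict(zip(source_vars, perm))
        s = mapping_score(mapping, source_rels, target_rels)
        if s > best_score:
            best, best_score = mapping, s
    return best, best_score

# Source task (e.g. 3-vs-2 Keepaway): one distance relation from the
# ball holder's perspective, between a teammate and a taker.
source_vars = ["K2", "T1"]
source_rels = [("dist", "K2", "T1")]
# Target task (e.g. 4-vs-3) has more players with analogous roles.
target_vars = ["K2", "K3", "T1", "T2"]
target_rels = [("dist", "K2", "T1"), ("dist", "K3", "T2")]

m, score = best_mapping(source_vars, target_vars, source_rels, target_rels)
print(m, score)
```

The returned mapping aligns analogous roles across the two tasks; in a value-function transfer setting, such a variable correspondence is what lets source-task values be copied onto target-task states.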
Cite
Liu and Stone. "Value-Function-Based Transfer for Reinforcement Learning Using Structure Mapping." AAAI Conference on Artificial Intelligence, 2006.
BibTeX
@inproceedings{liu2006aaai-value,
title = {{Value-Function-Based Transfer for Reinforcement Learning Using Structure Mapping}},
author = {Liu, Yaxin and Stone, Peter},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2006},
pages = {415--420},
url = {https://mlanthology.org/aaai/2006/liu2006aaai-value/}
}