General Game Learning Using Knowledge Transfer
Abstract
We present a reinforcement learning game player that can interact with a General Game Playing system and transfer knowledge learned in one game to expedite learning in many other games. We use the technique of value-function transfer, where general features are extracted from the state space of a previous game and matched with the completely different state space of a new game. To capture the underlying similarity of vastly disparate state spaces arising from different games, we use a game-tree lookahead structure for features. We show that such feature-based value-function transfer learns superior policies faster than a reinforcement learning agent that does not use knowledge transfer. Furthermore, knowledge transfer using lookahead features can capture opponent-specific value functions, i.e., it can exploit an opponent's weaknesses to learn faster than a reinforcement learner that uses lookahead with minimax (pessimistic) search against the same opponent.
URL: http://www.cs.utexas.edu/~banerjee/banerjee.pdf
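The core idea of the abstract, transferring value estimates between games through game-independent lookahead features rather than raw states, can be sketched as follows. This is a hedged illustration, not the paper's implementation: the toy games, the `lookahead_feature` function, and the averaging update are all simplified stand-ins (the paper uses richer game-tree features and standard TD learning within a General Game Playing framework).

```python
# Sketch: feature-based value-function transfer between two games.
# Assumption: states of different games map to the same abstract
# lookahead features, so values learned per feature carry over.
from collections import defaultdict

def lookahead_feature(state, successors, is_win):
    """Game-independent 1-ply feature: (branching factor, # immediately winning moves)."""
    kids = successors(state)
    return (len(kids), sum(1 for k in kids if is_win(k)))

def learn_feature_values(states, successors, is_win, reward, alpha=0.5, episodes=20):
    """Learn a value per feature with a simple averaging update (TD stand-in)."""
    V = defaultdict(float)
    for _ in range(episodes):
        for s in states:
            f = lookahead_feature(s, successors, is_win)
            V[f] += alpha * (reward(s) - V[f])
    return V

# Toy "source game": states are letters; transitions and rewards are hypothetical.
src_children = {"a": ["b", "c"], "b": [], "c": []}
src_wins = {"b"}
V = learn_feature_values(["a"],
                         lambda s: src_children[s],
                         lambda s: s in src_wins,
                         lambda s: 1.0)

# Toy "target game": a completely different state space, but the same
# abstract feature fires, so the learned value transfers as an initial estimate.
tgt_children = {"x": ["y", "z"], "y": [], "z": []}
tgt_wins = {"z"}
f = lookahead_feature("x", lambda s: tgt_children[s], lambda s: s in tgt_wins)
v0 = V.get(f, 0.0)  # transferred initial value for target state "x"
```

In the paper's setting the transferred values seed the learner in the new game, which then continues to update them; the speedup comes from starting near good estimates instead of from scratch.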
Cite
Text
Banerjee and Stone. "General Game Learning Using Knowledge Transfer." International Joint Conference on Artificial Intelligence, 2007.
Markdown
[Banerjee and Stone. "General Game Learning Using Knowledge Transfer." International Joint Conference on Artificial Intelligence, 2007.](https://mlanthology.org/ijcai/2007/banerjee2007ijcai-general/)
BibTeX
@inproceedings{banerjee2007ijcai-general,
title = {{General Game Learning Using Knowledge Transfer}},
author = {Banerjee, Bikramjit and Stone, Peter},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {2007},
pages = {672--677},
url = {https://mlanthology.org/ijcai/2007/banerjee2007ijcai-general/}
}