Learning to Play a Highly Complex Game from Human Expert Games
Abstract
When the number of possible moves in each state of a game becomes very high, standard methods for computer game playing are no longer feasible. We present an approach for learning to play such a game from human expert games. The high complexity of the action space is dealt with by collapsing the very large set of allowable actions into a small set of categories according to their semantic intent, while the complexity of the state space is handled by representing the states of collections of pieces by a few relevant features in a location-independent way. The state-action mappings implicit in the expert games are then learnt using neural networks. Experiments compare this approach to methods that have previously been applied to this domain.
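The approach outlined in the abstract — collapsing moves into a few semantic categories, summarising collections of pieces with location-independent features, and learning the state-to-category mapping from expert examples — can be sketched roughly as follows. The category names, the toy move encoding, and the synthetic training data here are illustrative assumptions, not details from the paper.

```python
import numpy as np

CATEGORIES = ["advance", "retreat", "hold", "support"]  # hypothetical categories

def categorize(move):
    """Collapse a concrete move into a semantic category.
    A move is a toy (dx, dy, near_friendly) triple -- an invented encoding."""
    dx, dy, near_friendly = move
    if dx == 0 and dy == 0:
        return "support" if near_friendly else "hold"
    return "advance" if dy > 0 else "retreat"

def features(pieces):
    """Location-independent features of a piece collection:
    its size and its spread around its own centre, not absolute coordinates."""
    pieces = np.asarray(pieces, dtype=float)
    centre = pieces.mean(axis=0)
    spread = np.linalg.norm(pieces - centre, axis=1).mean()
    return np.array([len(pieces), spread])

# Learn the state -> category mapping from (synthetic) expert examples
# with a single softmax layer trained by gradient descent.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))                # state features
y = (X[:, 0] + X[:, 1] > 0).astype(int)      # "expert" category choices
W, b = np.zeros((2, 2)), np.zeros(2)
for _ in range(300):
    logits = X @ W + b
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    grad = (p - np.eye(2)[y]) / len(X)       # cross-entropy gradient
    W -= 0.5 * X.T @ grad
    b -= 0.5 * grad.sum(axis=0)
acc = ((X @ W + b).argmax(axis=1) == y).mean()
```

The paper trains neural networks on features of this kind extracted from human games; the single softmax layer above is only a stand-in for that learner.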
Cite
Text
Kråkenes and Halck. "Learning to Play a Highly Complex Game from Human Expert Games." European Conference on Machine Learning, 2002. doi:10.1007/3-540-36755-1_18
Markdown
[Kråkenes and Halck. "Learning to Play a Highly Complex Game from Human Expert Games." European Conference on Machine Learning, 2002.](https://mlanthology.org/ecmlpkdd/2002/krakenes2002ecml-learning/) doi:10.1007/3-540-36755-1_18
BibTeX
@inproceedings{krakenes2002ecml-learning,
title = {{Learning to Play a Highly Complex Game from Human Expert Games}},
author = {Kråkenes, Tony and Halck, Ole Martin},
booktitle = {European Conference on Machine Learning},
year = {2002},
  pages = {207--218},
doi = {10.1007/3-540-36755-1_18},
url = {https://mlanthology.org/ecmlpkdd/2002/krakenes2002ecml-learning/}
}