Deep Learning for Reward Design to Improve Monte Carlo Tree Search in ATARI Games
Abstract
Monte Carlo Tree Search (MCTS) methods have proven powerful in planning for sequential decision-making problems such as Go and video games, but their performance can be poor when the planning depth and sampling trajectories are limited or when the rewards are sparse. We present an adaptation of PGRD (policy-gradient for reward design) for learning a reward-bonus function to improve UCT (an MCTS algorithm). Unlike previous applications of PGRD, in which the space of reward-bonus functions was limited to linear functions of hand-coded state-action features, we use PGRD with a multi-layer convolutional neural network to automatically learn features from raw perception as well as to adapt the parameters of the non-linear reward-bonus function. We also adopt a variance-reducing gradient method to improve PGRD's performance. The new method improves UCT's performance on multiple ATARI games compared to UCT without the reward bonus. Combining PGRD and deep learning in this way should make adapting rewards for MCTS algorithms far more widely and practically applicable than before.
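The core idea in the abstract can be illustrated with a minimal sketch: during UCT planning, each simulated environment reward is augmented by a learned reward bonus before being backed up the tree, while true performance is still measured on the environment rewards alone. The function names and the toy discount value below are illustrative assumptions, not the paper's implementation (which uses a CNN bonus over raw pixels).

```python
import math

def backup_value(rewards, bonuses, gamma=0.99):
    """Discounted return along one simulated UCT trajectory.

    Each environment reward r is augmented by a learned reward bonus b
    (the bonus shapes planning only; the agent is still evaluated on
    the raw environment rewards).
    """
    ret = 0.0
    for r, b in zip(reversed(rewards), reversed(bonuses)):
        ret = (r + b) + gamma * ret
    return ret

def ucb1(value_sum, parent_visits, child_visits, c=1.4):
    """Standard UCB1 score used by UCT for child selection;
    unvisited children are always explored first."""
    if child_visits == 0:
        return float("inf")
    exploit = value_sum / child_visits
    explore = c * math.sqrt(math.log(parent_visits) / child_visits)
    return exploit + explore
```

In the paper's setting, PGRD adjusts the bonus function's parameters by gradient ascent on the true (bonus-free) return of the UCT policy, so the learned bonus only has to make short, sparse-reward lookaheads more informative.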
Cite
Text
Guo et al. "Deep Learning for Reward Design to Improve Monte Carlo Tree Search in ATARI Games." International Joint Conference on Artificial Intelligence, 2016.
Markdown
[Guo et al. "Deep Learning for Reward Design to Improve Monte Carlo Tree Search in ATARI Games." International Joint Conference on Artificial Intelligence, 2016.](https://mlanthology.org/ijcai/2016/guo2016ijcai-deep/)
BibTeX
@inproceedings{guo2016ijcai-deep,
title = {{Deep Learning for Reward Design to Improve Monte Carlo Tree Search in ATARI Games}},
author = {Guo, Xiaoxiao and Singh, Satinder and Lewis, Richard L. and Lee, Honglak},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {2016},
pages = {1519--1525},
url = {https://mlanthology.org/ijcai/2016/guo2016ijcai-deep/}
}