Playing FPS Games with Environment-Aware Hierarchical Reinforcement Learning
Abstract
Learning rational behaviors in first-person shooter (FPS) games is a challenging task for Reinforcement Learning (RL), the primary difficulties being the huge action space and insufficient exploration. To address this, we propose a hierarchical agent based on combined options with intrinsic rewards to drive exploration. Specifically, we present a hierarchical model that works in a manager-worker fashion over two levels of hierarchy. The high-level manager learns a policy over options, and the low-level workers, motivated by intrinsic reward, learn to execute the options. Performance is further improved by appropriately harnessing environmental signals. Extensive experiments demonstrate that our trained bot significantly outperforms alternative RL-based models on FPS games that require maze solving, combat skills, and other abilities. Notably, we achieved first place in VDAIC 2018 Track (1).
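The abstract describes a two-level, manager-worker control loop: the manager picks an option, a worker executes it with primitive actions while being shaped by an intrinsic reward, and the manager is updated on the extrinsic return of the option. Below is a minimal Python sketch of that loop, assuming a classic Gym-style environment; the option set, the intrinsic-reward stub, and the placeholder policies are illustrative assumptions and not the authors' implementation.

# Minimal sketch (not the paper's code) of a manager-worker hierarchy
# with intrinsic rewards. Assumes a Gym-style env with reset()/step().
import random

OPTIONS = ["navigate", "attack", "collect"]  # hypothetical option set

class ManagerPolicy:
    """High-level policy over options (tabular, epsilon-greedy placeholder)."""
    def __init__(self, epsilon=0.1):
        self.q = {opt: 0.0 for opt in OPTIONS}
        self.epsilon = epsilon

    def select_option(self):
        if random.random() < self.epsilon:
            return random.choice(OPTIONS)
        return max(self.q, key=self.q.get)

    def update(self, option, extrinsic_return, lr=0.1):
        # The manager is trained on the environment (extrinsic) return only.
        self.q[option] += lr * (extrinsic_return - self.q[option])

class WorkerPolicy:
    """Low-level policy that executes one option with primitive actions."""
    def __init__(self, option, n_actions=8):
        self.option = option
        self.n_actions = n_actions

    def act(self, obs):
        return random.randrange(self.n_actions)  # placeholder action choice

def intrinsic_reward(option, obs, next_obs):
    # Hypothetical shaping signal for the worker, e.g. progress toward the
    # option's sub-goal (distance covered, damage dealt, items picked up).
    return 0.0

def run_episode(env, manager, horizon=10, option_len=20):
    obs = env.reset()
    for _ in range(horizon):
        option = manager.select_option()
        worker = WorkerPolicy(option)
        extrinsic_return = 0.0
        for _ in range(option_len):
            action = worker.act(obs)
            next_obs, ext_r, done, _ = env.step(action)
            r_int = intrinsic_reward(option, obs, next_obs)
            # The worker would be trained on ext_r + r_int; training omitted.
            extrinsic_return += ext_r
            obs = next_obs
            if done:
                break
        manager.update(option, extrinsic_return)
        if done:
            return

The point the sketch mirrors is the split of reward streams: the worker learns from the intrinsic signal that defines its option, while the manager's policy over options is updated only from the environment return.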
Cite
Text
Song et al. "Playing FPS Games with Environment-Aware Hierarchical Reinforcement Learning." International Joint Conference on Artificial Intelligence, 2019. doi:10.24963/IJCAI.2019/482
Markdown
[Song et al. "Playing FPS Games with Environment-Aware Hierarchical Reinforcement Learning." International Joint Conference on Artificial Intelligence, 2019.](https://mlanthology.org/ijcai/2019/song2019ijcai-playing/) doi:10.24963/IJCAI.2019/482
BibTeX
@inproceedings{song2019ijcai-playing,
title = {{Playing FPS Games with Environment-Aware Hierarchical Reinforcement Learning}},
author = {Song, Shihong and Weng, Jiayi and Su, Hang and Yan, Dong and Zou, Haosheng and Zhu, Jun},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {2019},
pages = {3475--3482},
doi = {10.24963/IJCAI.2019/482},
url = {https://mlanthology.org/ijcai/2019/song2019ijcai-playing/}
}