Combo-Action: Training Agent for FPS Game with Auxiliary Tasks

Abstract

Deep reinforcement learning (DRL) has achieved superhuman performance on Atari games, learning everything from raw pixels and rewards. However, first-person-shooter (FPS) games in 3D environments involve higher-level human concepts (enemies, weapons, spatial structure, etc.) and a large action space. In this paper, we explore a novel method which can plan on temporally-extended action sequences, which we refer to as Combo-Action, to compress the action space. We further train a deep recurrent Q-learning network model as a high-level controller, called the supervisory network, to manage the Combo-Actions. Our method can be boosted with auxiliary tasks (enemy detection and depth prediction), which enable the agent to extract high-level concepts in FPS games. Extensive experiments show that our method is efficient in the training process and outperforms previous state-of-the-art approaches by a large margin. Ablation experiments also indicate that our method boosts the performance of the FPS agent in a reasonable way.
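The core idea of compressing the action space, as described in the abstract, is that a high-level controller selects a named Combo-Action, which is then unrolled into a sequence of primitive actions in the environment. A minimal sketch of that expansion step is below; the action names, combo definitions, and `env_step` interface are illustrative assumptions, not the paper's actual definitions:

```python
from typing import Callable, Dict, List

# Hypothetical Combo-Action table: each high-level action expands into a
# fixed sequence of primitive FPS actions. The real Combo-Actions in the
# paper are designed for the specific FPS environment and may be adaptive.
COMBO_ACTIONS: Dict[str, List[str]] = {
    "turn_and_shoot": ["TURN_LEFT", "TURN_LEFT", "ATTACK"],
    "advance": ["MOVE_FORWARD", "MOVE_FORWARD"],
    "strafe_retreat": ["MOVE_RIGHT", "TURN_RIGHT", "MOVE_FORWARD"],
}


def execute_combo(env_step: Callable[[str], object], combo_name: str) -> List[object]:
    """Unroll one Combo-Action chosen by the supervisory network into its
    primitive action sequence, stepping the environment once per primitive.

    The controller thus only has to choose among len(COMBO_ACTIONS) options
    instead of the full primitive action space at every frame.
    """
    results = []
    for primitive in COMBO_ACTIONS[combo_name]:
        results.append(env_step(primitive))
    return results


# Usage with a stub environment step that simply echoes the action taken:
trace = execute_combo(lambda action: action, "turn_and_shoot")
print(trace)  # the three primitives of "turn_and_shoot", in order
```

In the paper's setup the `combo_name` would come from the supervisory (deep recurrent Q-learning) network, which learns Q-values over Combo-Actions rather than primitive actions.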

Cite

Text

Huang et al. "Combo-Action: Training Agent for FPS Game with Auxiliary Tasks." AAAI Conference on Artificial Intelligence, 2019. doi:10.1609/AAAI.V33I01.3301954

Markdown

[Huang et al. "Combo-Action: Training Agent for FPS Game with Auxiliary Tasks." AAAI Conference on Artificial Intelligence, 2019.](https://mlanthology.org/aaai/2019/huang2019aaai-combo/) doi:10.1609/AAAI.V33I01.3301954

BibTeX

@inproceedings{huang2019aaai-combo,
  title     = {{Combo-Action: Training Agent for FPS Game with Auxiliary Tasks}},
  author    = {Huang, Shiyu and Su, Hang and Zhu, Jun and Chen, Ting},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2019},
  pages     = {954-961},
  doi       = {10.1609/AAAI.V33I01.3301954},
  url       = {https://mlanthology.org/aaai/2019/huang2019aaai-combo/}
}