SCC: An Efficient Deep Reinforcement Learning Agent Mastering the Game of StarCraft II

Abstract

AlphaStar, the AI that reached GrandMaster level in StarCraft II, is a remarkable milestone demonstrating what deep reinforcement learning can achieve in complex Real-Time Strategy (RTS) games. However, the complexity of the game, of the algorithms and systems involved, and especially the tremendous amount of computation required are major obstacles for the community to conduct further research in this direction. We propose a deep reinforcement learning agent, StarCraft Commander (SCC). With an order of magnitude less computation, it demonstrates top human performance, defeating GrandMaster players in test matches and top professional players in a live event. Moreover, it shows strong robustness to a variety of human strategies and discovers novel strategies unseen in human play. In this paper, we share the key insights and optimizations behind efficient imitation learning and reinforcement learning for the StarCraft II full game.

Cite

Text

Wang et al. "SCC: An Efficient Deep Reinforcement Learning Agent Mastering the Game of StarCraft II." International Conference on Machine Learning, 2021.

Markdown

[Wang et al. "SCC: An Efficient Deep Reinforcement Learning Agent Mastering the Game of StarCraft II." International Conference on Machine Learning, 2021.](https://mlanthology.org/icml/2021/wang2021icml-scc/)

BibTeX

@inproceedings{wang2021icml-scc,
  title     = {{SCC: An Efficient Deep Reinforcement Learning Agent Mastering the Game of StarCraft II}},
  author    = {Wang, Xiangjun and Song, Junxiao and Qi, Penghui and Peng, Peng and Tang, Zhenkun and Zhang, Wei and Li, Weimin and Pi, Xiongjun and He, Jujie and Gao, Chao and Long, Haitao and Yuan, Quan},
  booktitle = {International Conference on Machine Learning},
  year      = {2021},
  pages     = {10905--10915},
  volume    = {139},
  url       = {https://mlanthology.org/icml/2021/wang2021icml-scc/}
}