Deep Reinforcement Learning in Ice Hockey for Context-Aware Player Evaluation

Abstract

A variety of machine learning models have been proposed to assess the performance of players in professional sports. However, they have only a limited ability to model how player performance depends on the game context. This paper proposes a new approach to capturing game context: we apply Deep Reinforcement Learning (DRL) to learn an action-value Q function from 3M play-by-play events in the National Hockey League (NHL). The neural network representation integrates both continuous context signals and game history, using a possession-based LSTM. The learned Q-function is used to value players' actions under different game contexts. To assess a player's overall performance, we introduce a novel Game Impact Metric (GIM) that aggregates the values of the player's actions. Empirical evaluation shows that GIM is consistent throughout a season and correlates highly with standard success measures and future salary.
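The GIM described above credits each acting player with the change in the learned Q-value caused by their action, then sums those impacts over a season. A minimal sketch of that aggregation step (the data layout, function name, and player identifiers here are illustrative assumptions, not the authors' code):

```python
from collections import defaultdict

def game_impact_metric(events, q_values):
    """Aggregate per-action impact into a per-player score.

    events:   ordered play-by-play events as (player_id, action) pairs
    q_values: learned Q(s_t, a_t) estimate for each event, same order

    The impact of event t is taken as the change in Q relative to the
    previous event, credited to the player who acted (a simplification
    of the paper's team-conditioned impact definition).
    """
    gim = defaultdict(float)
    prev_q = 0.0  # assume Q starts at 0 before the first event
    for (player, _action), q in zip(events, q_values):
        gim[player] += q - prev_q
        prev_q = q
    return dict(gim)
```

For example, with events `[("p1", "shot"), ("p2", "pass"), ("p1", "goal")]` and Q-values `[0.2, 0.1, 0.6]`, player `p1` accumulates `0.2 + 0.5 = 0.7` while `p2` accumulates `-0.1`, reflecting that the pass lowered the estimated scoring chance.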

Cite

Text

Liu and Schulte. "Deep Reinforcement Learning in Ice Hockey for Context-Aware Player Evaluation." International Joint Conference on Artificial Intelligence, 2018. doi:10.24963/IJCAI.2018/478

Markdown

[Liu and Schulte. "Deep Reinforcement Learning in Ice Hockey for Context-Aware Player Evaluation." International Joint Conference on Artificial Intelligence, 2018.](https://mlanthology.org/ijcai/2018/liu2018ijcai-deep/) doi:10.24963/IJCAI.2018/478

BibTeX

@inproceedings{liu2018ijcai-deep,
  title     = {{Deep Reinforcement Learning in Ice Hockey for Context-Aware Player Evaluation}},
  author    = {Liu, Guiliang and Schulte, Oliver},
  booktitle = {International Joint Conference on Artificial Intelligence},
  year      = {2018},
  pages     = {3442--3448},
  doi       = {10.24963/IJCAI.2018/478},
  url       = {https://mlanthology.org/ijcai/2018/liu2018ijcai-deep/}
}