A Markov Game Model for Valuing Player Actions in Ice Hockey

Abstract

A variety of advanced statistics are used to evaluate player actions in the National Hockey League, but they fail to account for the context in which an action occurs or to look ahead to the long-term effects of an action. We apply the Markov Game formalism to develop a novel approach to valuing player actions that incorporates context and lookahead. Dynamic programming is used to learn Q-functions that quantify the impact of actions on goal scoring and on penalties, respectively. Learning is based on a massive dataset that contains over 2.8 million events in the National Hockey League. The impact of player actions is found to vary widely depending on the context, with possible positive and negative effects for the same action. We show that lookahead makes a substantial difference to the action impact scores. Players are ranked according to the aggregate impact of their actions. We compare this impact ranking with previous player metrics, such as plus-minus, total points, and salary.
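The abstract's core computation is dynamic programming over play-by-play states to learn a Q-function. The sketch below illustrates that idea with a greedy value-iteration variant on a tiny hypothetical transition model (the state names, actions, and probabilities are invented for illustration; the paper itself estimates values from observed NHL play sequences). Here Q(s, a) approximates the probability that the team of interest scores the next goal, and an action's impact in context is Q(s, a) − V(s):

```python
# Hedged sketch: value iteration for a Q-function over a toy play-by-play
# model. All states, actions, and probabilities below are hypothetical
# illustrations, not the paper's actual data or exact algorithm.

from collections import defaultdict

# (state, action) -> list of (next_state, probability, reward).
# Reward 1.0 marks a goal for the team of interest; episodes end
# at the absorbing states "goal_for" and "goal_against".
TRANSITIONS = {
    ("neutral", "pass"):   [("offensive", 0.5, 0.0),
                            ("goal_against", 0.1, 0.0),
                            ("neutral", 0.4, 0.0)],
    ("neutral", "dump"):   [("offensive", 0.3, 0.0),
                            ("goal_against", 0.05, 0.0),
                            ("neutral", 0.65, 0.0)],
    ("offensive", "shot"): [("goal_for", 0.1, 1.0),
                            ("neutral", 0.9, 0.0)],
    ("offensive", "pass"): [("offensive", 0.6, 0.0),
                            ("neutral", 0.4, 0.0)],
}

def value_iteration(transitions, iters=100):
    """Iterate the Bellman backup: Q(s,a) is the expected chance that
    the next goal is scored by the team of interest."""
    V = defaultdict(float)  # absorbing goal states keep value 0
    Q = {}
    for _ in range(iters):
        for (s, a), outcomes in transitions.items():
            Q[(s, a)] = sum(p * (r + V[s2]) for s2, p, r in outcomes)
        V = defaultdict(float)
        for (s, a), q in Q.items():  # greedy state value over actions
            V[s] = max(V[s], q)
    return Q, V

Q, V = value_iteration(TRANSITIONS)
# Context-dependent action impact, as in the abstract's framing:
impact = {sa: q - V[sa[0]] for sa, q in Q.items()}
```

Because the model is a zero-reward chain except at the absorbing goal states, the fixed-point Q-values stay in [0, 1] and can be read as next-goal probabilities; the same impact difference Q(s, a) − V(s) can be positive in one context and negative in another, which is the abstract's point about context dependence.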

Cite

Text

Routley and Schulte. "A Markov Game Model for Valuing Player Actions in Ice Hockey." Conference on Uncertainty in Artificial Intelligence, 2015.

Markdown

[Routley and Schulte. "A Markov Game Model for Valuing Player Actions in Ice Hockey." Conference on Uncertainty in Artificial Intelligence, 2015.](https://mlanthology.org/uai/2015/routley2015uai-markov/)

BibTeX

@inproceedings{routley2015uai-markov,
  title     = {{A Markov Game Model for Valuing Player Actions in Ice Hockey}},
  author    = {Routley, Kurt and Schulte, Oliver},
  booktitle = {Conference on Uncertainty in Artificial Intelligence},
  year      = {2015},
  pages     = {782--791},
  url       = {https://mlanthology.org/uai/2015/routley2015uai-markov/}
}