Explaining Reinforcement Learning with Shapley Values

Abstract

For reinforcement learning systems to be widely adopted, their users must understand and trust them. We present a theoretical analysis of explaining reinforcement learning using Shapley values, following a principled approach from game theory for identifying the contribution of individual players to the outcome of a cooperative game. We call this general framework Shapley Values for Explaining Reinforcement Learning (SVERL). Our analysis exposes the limitations of earlier uses of Shapley values in reinforcement learning. We then develop an approach that uses Shapley values to explain agent performance. In a variety of domains, SVERL produces meaningful explanations that match and supplement human intuition.
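To make the underlying concept concrete: the Shapley value of a player is its marginal contribution to a coalition's payoff, averaged over all orderings in which coalitions can form. The sketch below computes exact Shapley values for a toy cooperative game by brute-force enumeration. It is purely illustrative of the game-theoretic foundation the abstract refers to, not the SVERL method itself; the function names and the example game are our own assumptions.

```python
from itertools import combinations
from math import factorial

def shapley_values(players, v):
    """Exact Shapley values for a cooperative game (illustrative sketch).

    players: list of player identifiers.
    v: characteristic function mapping a frozenset of players to a payoff.
    """
    n = len(players)
    values = {}
    for p in players:
        others = [q for q in players if q != p]
        total = 0.0
        # Average p's marginal contribution v(S ∪ {p}) − v(S) over all
        # coalitions S of the remaining players, with the Shapley weight.
        for r in range(len(others) + 1):
            for coalition in combinations(others, r):
                s = frozenset(coalition)
                weight = (factorial(len(s)) * factorial(n - len(s) - 1)
                          / factorial(n))
                total += weight * (v(s | {p}) - v(s))
        values[p] = total
    return values

# Toy game: the payoff is 1 only when players "a" and "b" both cooperate,
# so "a" and "b" each receive half the credit and "c" receives none.
v = lambda s: 1.0 if {"a", "b"} <= s else 0.0
print(shapley_values(["a", "b", "c"], v))
```

Enumeration costs O(2^n) evaluations of v, so exact computation is feasible only for small games; in practice Shapley values are typically approximated by sampling.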

Cite

Text

Beechey et al. "Explaining Reinforcement Learning with Shapley Values." International Conference on Machine Learning, 2023.

Markdown

[Beechey et al. "Explaining Reinforcement Learning with Shapley Values." International Conference on Machine Learning, 2023.](https://mlanthology.org/icml/2023/beechey2023icml-explaining/)

BibTeX

@inproceedings{beechey2023icml-explaining,
  title     = {{Explaining Reinforcement Learning with Shapley Values}},
  author    = {Beechey, Daniel and Smith, Thomas M. S. and Şimşek, Özgür},
  booktitle = {International Conference on Machine Learning},
  year      = {2023},
  pages     = {2003--2014},
  volume    = {202},
  url       = {https://mlanthology.org/icml/2023/beechey2023icml-explaining/}
}