Causal Shapley Values: Exploiting Causal Knowledge to Explain Individual Predictions of Complex Models
Abstract
Shapley values underlie one of the most popular model-agnostic methods within explainable artificial intelligence. These values are designed to attribute the difference between a model's prediction and an average baseline to the different features used as input to the model. Being based on solid game-theoretic principles, Shapley values uniquely satisfy several desirable properties, which is why they are increasingly used to explain the predictions of possibly complex and highly non-linear machine learning models. Shapley values are well calibrated to a user's intuition when features are independent, but may lead to undesirable, counterintuitive explanations when the independence assumption is violated.
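To make the attribution scheme concrete, the sketch below computes exact Shapley values for a toy linear model by enumerating all feature coalitions, using the "marginal" value function that underlies the independence assumption the abstract criticizes. The model, background data, and all names are illustrative, not from the paper; the causal Shapley values proposed in the paper would instead derive the value function from a causal model of the features (interventional rather than marginal expectations).

from itertools import combinations
from math import factorial

import numpy as np

# Illustrative setup: a simple linear model and a background sample.
rng = np.random.default_rng(0)
X_bg = rng.normal(size=(1000, 3))   # background data defining the baseline
weights = np.array([2.0, -1.0, 0.5])

def f(X):
    # The "black-box" model being explained (linear here for simplicity).
    return X @ weights

def value(S, x):
    # Marginal value function v(S): features in coalition S are fixed to
    # their values in x, the remaining features are averaged over the
    # background distribution. This is the independence-assuming variant;
    # causal Shapley values would replace it with an interventional
    # expectation derived from a causal graph.
    Xs = X_bg.copy()
    Xs[:, list(S)] = x[list(S)]
    return f(Xs).mean()

def shapley_values(x):
    # phi_i = sum over coalitions S not containing i of
    #         |S|! (n - |S| - 1)! / n! * [v(S + {i}) - v(S)]
    n = len(x)
    phi = np.zeros(n)
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(len(others) + 1):
            for S in combinations(others, k):
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[i] += w * (value(S + (i,), x) - value(S, x))
    return phi

x = np.array([1.0, 2.0, -1.0])
phi = shapley_values(x)
print(phi)
# Efficiency property: the attributions sum exactly to the difference
# between the prediction for x and the average baseline prediction.
print(phi.sum(), f(x[None])[0] - f(X_bg).mean())

The final check illustrates the property stated in the abstract: the per-feature attributions sum to the difference between the model's prediction and the average baseline.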
Cite
Text
Heskes et al. "Causal Shapley Values: Exploiting Causal Knowledge to Explain Individual Predictions of Complex Models." Neural Information Processing Systems, 2020.

Markdown
[Heskes et al. "Causal Shapley Values: Exploiting Causal Knowledge to Explain Individual Predictions of Complex Models." Neural Information Processing Systems, 2020.](https://mlanthology.org/neurips/2020/heskes2020neurips-causal/)

BibTeX
@inproceedings{heskes2020neurips-causal,
title = {{Causal Shapley Values: Exploiting Causal Knowledge to Explain Individual Predictions of Complex Models}},
author = {Heskes, Tom and Sijben, Evi and Bucur, Ioan Gabriel and Claassen, Tom},
booktitle = {Neural Information Processing Systems},
year = {2020},
url = {https://mlanthology.org/neurips/2020/heskes2020neurips-causal/}
}