On the Tractability of SHAP Explanations
Abstract
SHAP explanations are a popular feature-attribution mechanism for explainable AI. They use game-theoretic notions to measure the influence of individual features on the prediction of a machine learning model. Despite a lot of recent interest from both academia and industry, it is not known whether SHAP explanations of common machine learning models can be computed efficiently. In this paper, we establish the complexity of computing the SHAP explanation in three important settings. First, we consider fully-factorized data distributions, and show that the complexity of computing the SHAP explanation is the same as the complexity of computing the expected value of the model. This fully-factorized setting is often used to simplify the SHAP computation, yet our results show that the computation can be intractable for commonly used models such as logistic regression. Going beyond fully-factorized distributions, we show that computing SHAP explanations is already intractable for a very simple setting: computing SHAP explanations of trivial classifiers over naive Bayes distributions. Finally, we show that even computing SHAP over the empirical distribution is #P-hard.
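To make concrete the quantity whose complexity the paper studies, the sketch below computes exact SHAP values by brute-force enumeration over feature subsets, under a fully-factorized (independent-feature) distribution. The toy conjunction model, the function names, and the Bernoulli marginals are illustrative assumptions, not artifacts from the paper; the exponential enumeration is exactly the naive cost that motivates the tractability question.

```python
from itertools import combinations
from math import factorial

def shap_values(f, x, marginals):
    """Exact SHAP values by subset enumeration (illustrative sketch).

    Assumes binary features with a fully-factorized distribution:
    marginals[i] = P(X_i = 1), features independent.
    """
    n = len(x)
    feats = list(range(n))

    def value(S):
        # v(S) = E[f(X) | X_S = x_S]: fix features in S to their values
        # in x, average over the remaining features' marginals.
        out = [i for i in feats if i not in S]
        total = 0.0
        for bits in range(2 ** len(out)):
            z = list(x)
            prob = 1.0
            for k, i in enumerate(out):
                b = (bits >> k) & 1
                z[i] = b
                prob *= marginals[i] if b == 1 else 1 - marginals[i]
            total += prob * f(z)
        return total

    phi = []
    for i in feats:
        rest = [j for j in feats if j != i]
        contrib = 0.0
        for r in range(len(rest) + 1):
            for S in combinations(rest, r):
                # Shapley weight |S|! (n - |S| - 1)! / n!
                w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                contrib += w * (value(set(S) | {i}) - value(set(S)))
        phi.append(contrib)
    return phi

# Toy model: f(x) = x1 AND x2, independent Bernoulli(0.5) features.
f = lambda z: z[0] * z[1]
phi = shap_values(f, x=[1, 1], marginals=[0.5, 0.5])
# Sanity check: the values sum to f(x) - E[f(X)] = 1 - 0.25.
```

By symmetry of the conjunction, both features receive the same attribution here; the point of the paper is that avoiding this exponential enumeration is possible for some model classes but provably not for others (e.g., logistic regression under fully-factorized data).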
Cite
Text
Van den Broeck et al. "On the Tractability of SHAP Explanations." AAAI Conference on Artificial Intelligence, 2021. doi:10.1609/AAAI.V35I7.16806
Markdown
[Van den Broeck et al. "On the Tractability of SHAP Explanations." AAAI Conference on Artificial Intelligence, 2021.](https://mlanthology.org/aaai/2021/denbroeck2021aaai-tractability/) doi:10.1609/AAAI.V35I7.16806
BibTeX
@inproceedings{denbroeck2021aaai-tractability,
title = {{On the Tractability of SHAP Explanations}},
author = {Van den Broeck, Guy and Lykov, Anton and Schleich, Maximilian and Suciu, Dan},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2021},
pages = {6505-6513},
doi = {10.1609/AAAI.V35I7.16806},
url = {https://mlanthology.org/aaai/2021/denbroeck2021aaai-tractability/}
}