Interpreting Multivariate Shapley Interactions in DNNs
Abstract
This paper aims to explain deep neural networks (DNNs) from the perspective of multivariate interactions. We define and quantify the significance of interactions among multiple input variables of a DNN. Input variables with strong interactions usually form a coalition and reflect prototype features, which the DNN memorizes and uses for inference. We define the significance of interactions based on the Shapley value, which assigns an attribution value to each input variable for the inference. Experiments on various DNNs demonstrate the effectiveness of the proposed method.
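To make the idea concrete, below is a minimal Python sketch, under stated assumptions: value_fn is a hypothetical stand-in for the network's output on an input where variables outside a subset S are masked; shapley computes exact Shapley values by enumerating orderings (toy-sized games only); and interaction_significance uses one plausible formalization of the interaction measure (the coalition acting as a single player versus each member acting with its partners removed), not necessarily the paper's exact formula.

import math
from itertools import permutations


def _expand(subset):
    # Flatten pseudo-players (frozensets of variables) into base variables.
    out = set()
    for p in subset:
        out |= set(p) if isinstance(p, frozenset) else {p}
    return frozenset(out)


def shapley(value_fn, players):
    # Exact Shapley values: average each player's marginal contribution
    # over all orderings of the players (enumeration; toy games only).
    players = list(players)
    phi = {p: 0.0 for p in players}
    for order in permutations(players):
        prefix = set()
        v_prev = value_fn(_expand(prefix))
        for p in order:
            prefix.add(p)
            v_curr = value_fn(_expand(prefix))
            phi[p] += v_curr - v_prev  # marginal contribution of p
            v_prev = v_curr
    n_orders = math.factorial(len(players))
    return {p: s / n_orders for p, s in phi.items()}


def interaction_significance(value_fn, variables, coalition):
    # Illustrative reading (an assumption, not the paper's exact formula):
    # the Shapley value of the coalition treated as a single player, minus
    # each member's Shapley value when its partners are absent.
    A = frozenset(coalition)
    others = [v for v in variables if v not in A]
    phi_joint = shapley(value_fn, others + [A])[A]
    phi_alone = sum(shapley(value_fn, others + [i])[i] for i in A)
    return phi_joint - phi_alone


# Toy payoff: variables 0 and 1 pay off only together (an AND pattern),
# so they should show a strong positive interaction.
v = lambda S: 1.0 if {0, 1} <= S else 0.0
print(interaction_significance(v, [0, 1, 2], [0, 1]))  # -> 1.0

In the toy game, neither variable contributes anything alone, but together they produce the full payoff, so the measure attributes the entire output to their interaction.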
Cite
Text
Zhang et al. "Interpreting Multivariate Shapley Interactions in DNNs." AAAI Conference on Artificial Intelligence, 2021. doi:10.1609/AAAI.V35I12.17299

Markdown
[Zhang et al. "Interpreting Multivariate Shapley Interactions in DNNs." AAAI Conference on Artificial Intelligence, 2021.](https://mlanthology.org/aaai/2021/zhang2021aaai-interpreting/) doi:10.1609/AAAI.V35I12.17299

BibTeX
@inproceedings{zhang2021aaai-interpreting,
title = {{Interpreting Multivariate Shapley Interactions in DNNs}},
author = {Zhang, Hao and Xie, Yichen and Zheng, Longjie and Zhang, Die and Zhang, Quanshi},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2021},
pages = {10877-10886},
doi = {10.1609/AAAI.V35I12.17299},
url = {https://mlanthology.org/aaai/2021/zhang2021aaai-interpreting/}
}