Generative Causal Explanations for Graph Neural Networks
Abstract
This paper presents {\em Gem}, a model-agnostic approach for providing interpretable explanations for any GNN on various graph learning tasks. Specifically, we formulate the problem of explaining the decisions of GNNs as a causal learning task. We then train a causal explanation model equipped with a loss function based on Granger causality. Unlike existing explainers for GNNs, {\em Gem} explains GNNs on graph-structured data from a causal perspective. It generalizes better because it places no requirements on the internal structure of the GNNs and needs no prior knowledge of the graph learning tasks. In addition, once trained, {\em Gem} can explain the target GNN very quickly. Our theoretical analysis shows that several recent explainers fall into a unified framework of {\em additive feature attribution methods}. Experimental results on synthetic and real-world datasets show that {\em Gem} achieves a relative increase in explanation accuracy of up to $30\%$ and speeds up the explanation process by up to $110\times$ compared to its state-of-the-art alternatives.
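To make the Granger-causality idea in the abstract concrete, here is a minimal sketch (not the authors' released code) of one way to quantify an edge's causal contribution to a GNN's prediction: score each edge by how much the target GNN's loss changes when that edge is deleted. The `gnn(adj, feats)` interface, the helper name, and the tensor shapes are illustrative assumptions.

```python
# Sketch of a Granger-causality-style edge attribution for a trained GNN.
# Assumes `gnn(adj, feats)` returns class logits of shape [1, C] and
# `label` is a long tensor of shape [1]; these are not part of the paper.
import torch
import torch.nn.functional as F

def edge_causal_contributions(gnn, adj, feats, label):
    """Score each existing edge by how much removing it degrades the
    target GNN's prediction on this graph."""
    with torch.no_grad():
        base_loss = F.cross_entropy(gnn(adj, feats), label)
        scores = {}
        for i, j in (adj > 0).nonzero().tolist():
            if i >= j:  # treat the graph as undirected; score each edge once
                continue
            pruned = adj.clone()
            pruned[i, j] = pruned[j, i] = 0.0
            loss = F.cross_entropy(gnn(pruned, feats), label)
            # A larger positive score means deleting the edge hurts the
            # prediction more, i.e. the edge matters for the GNN's decision.
            scores[(i, j)] = (loss - base_loss).item()
    return scores
```

In the paper's framework, such distilled per-edge contributions serve as supervision for a generative explanation model, so that explanations for new instances come from a single forward pass rather than per-instance optimization; this is consistent with, though a simplification of, the "once trained, can be used to explain the target GNN very quickly" claim above.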
Cite
Text
Lin et al. "Generative Causal Explanations for Graph Neural Networks." International Conference on Machine Learning, 2021.
Markdown
[Lin et al. "Generative Causal Explanations for Graph Neural Networks." International Conference on Machine Learning, 2021.](https://mlanthology.org/icml/2021/lin2021icml-generative/)
BibTeX
@inproceedings{lin2021icml-generative,
title = {{Generative Causal Explanations for Graph Neural Networks}},
author = {Lin, Wanyu and Lan, Hao and Li, Baochun},
booktitle = {International Conference on Machine Learning},
year = {2021},
pages = {6666--6679},
volume = {139},
url = {https://mlanthology.org/icml/2021/lin2021icml-generative/}
}