On the Consistency of GNN Explainability Methods
Abstract
Despite the widespread use of post-hoc explanation methods for graph neural networks (GNNs) in high-stakes settings, their quality and reliability have not been comprehensively evaluated. Such an evaluation is challenging primarily due to the non-Euclidean nature of graph data, its arbitrary size, and its complex topological structure. In this context, we argue that the \emph{consistency} of GNN explanations, i.e., the ability to produce similar explanations for input graphs with minor structural changes that do not alter their output predictions, is a key requirement for effective post-hoc GNN explanations. To fill this gap, we introduce a novel metric based on the Fused Gromov-Wasserstein (FGW) distance to quantify consistency.
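The abstract does not spell out the authors' implementation, but the idea of an FGW-based comparison can be sketched in a simplified setting. The FGW objective blends a Wasserstein term on node features with a Gromov-Wasserstein term on pairwise structure, minimized over couplings; when a perturbation preserves node identity, the identity coupling is feasible, so plugging it in yields a cheap upper bound on the FGW distance between two explanation-attributed graphs. The function below is an illustrative numpy sketch under that assumption, not the paper's metric; all names here are hypothetical.

```python
import numpy as np

def fgw_upper_bound(feat1, feat2, A1, A2, alpha=0.5):
    """Upper bound on the Fused Gromov-Wasserstein distance between two
    attributed graphs sharing a node set, obtained by fixing the coupling
    to the (uniform) identity matching instead of optimizing over couplings.

    feat1, feat2 : (n, d) node-level explanation scores/features.
    A1, A2       : (n, n) structure matrices (e.g. adjacency or edge masks).
    alpha        : trade-off between feature (Wasserstein) and structure
                   (Gromov-Wasserstein) terms.
    """
    n = feat1.shape[0]
    pi = np.eye(n) / n  # identity coupling: node i of graph 1 maps to node i of graph 2
    # Feature term: squared Euclidean distances between coupled node features.
    M = ((feat1[:, None, :] - feat2[None, :, :]) ** 2).sum(-1)
    feat_term = (M * pi).sum()
    # Structure term: discrepancy of pairwise relations, L[i,j,k,l] = (A1[i,k] - A2[j,l])^2.
    L = (A1[:, None, :, None] - A2[None, :, None, :]) ** 2
    struct_term = np.einsum('ijkl,ij,kl->', L, pi, pi)
    return (1 - alpha) * feat_term + alpha * struct_term
```

Identical attributed graphs give a bound of zero, while deleting an edge (a minor structural change) yields a positive value, so smaller values indicate more consistent explanations under this proxy. For the true FGW distance with optimized couplings, a solver such as the POT library can be used instead.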
Cite
Hajiramezanali et al. "On the Consistency of GNN Explainability Methods." NeurIPS 2023 Workshops: GLFrontiers, 2023.
BibTeX
@inproceedings{hajiramezanali2023neuripsw-consistency,
title = {{On the Consistency of GNN Explainability Methods}},
author = {Hajiramezanali, Ehsan and Maleki, Sepideh and Tseng, Alex and BenTaieb, Aicha and Scalia, Gabriele and Biancalani, Tommaso},
booktitle = {NeurIPS 2023 Workshops: GLFrontiers},
year = {2023},
url = {https://mlanthology.org/neuripsw/2023/hajiramezanali2023neuripsw-consistency/}
}