Score-Based Explainability for Graph Representations

Abstract

Despite the widespread use of unsupervised Graph Neural Networks (GNNs), their post-hoc explainability remains underexplored. Current graph explanation methods typically focus on explaining a single dimension of the final output. However, unsupervised and self-supervised GNNs produce d-dimensional representation vectors whose individual elements lack clear, disentangled semantic meaning. To tackle this issue, we draw inspiration from the success of score-based graph explainers in supervised GNNs and propose a novel framework, grXAI, for graph representation explainability. grXAI generalizes existing score-based graph explainers to identify the subgraph most responsible for constructing the latent representation of the input graph. This framework can be easily and efficiently implemented as a wrapper around existing methods, enabling the explanation of graph representations through connected subgraphs, which are more human-intelligible. Extensive qualitative and quantitative experiments demonstrate grXAI's strong ability to identify subgraphs that effectively explain learned graph representations across various unsupervised tasks and learning algorithms.
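The abstract describes the framework only at a high level. As a rough, hedged illustration of the general idea behind score-based explainability for graph representations, the sketch below scores candidate connected subgraphs by how closely their embedding matches the full graph's embedding. The encoder `encode_graph`, the cosine-similarity score, and the hand-picked node masks are all illustrative assumptions and stand in for a pretrained unsupervised GNN and the paper's actual scoring procedure, which is not reproduced here.

```python
import numpy as np

# Hypothetical encoder: maps an adjacency matrix and node features to a
# d-dimensional representation. Stands in for any pretrained unsupervised GNN.
def encode_graph(adj, feats):
    # Toy 1-hop mean aggregation (with self-loops) plus a global mean readout.
    agg = (adj + np.eye(adj.shape[0])) @ feats
    return agg.mean(axis=0)

def cosine(u, v, eps=1e-12):
    # Cosine similarity used here as an assumed subgraph score.
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + eps))

def score_subgraphs(adj, feats, node_masks):
    """Score each candidate subgraph (given as a boolean node mask) by how
    closely its representation matches the full-graph representation."""
    z_full = encode_graph(adj, feats)
    scores = []
    for mask in node_masks:
        idx = np.where(mask)[0]
        sub_adj = adj[np.ix_(idx, idx)]      # induced subgraph adjacency
        sub_feats = feats[idx]
        z_sub = encode_graph(sub_adj, sub_feats)
        scores.append(cosine(z_sub, z_full))
    return scores

# Usage on a toy 4-node graph with two candidate connected subgraphs.
adj = np.array([[0, 1, 1, 0],
                [1, 0, 1, 0],
                [1, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
feats = np.random.default_rng(0).normal(size=(4, 8))
masks = [np.array([True, True, True, False]),   # triangle subgraph
         np.array([False, False, True, True])]  # single pendant edge
print(score_subgraphs(adj, feats, masks))  # higher score = better explanation
```

The subgraph receiving the highest score would be returned as the explanation; in the actual framework the candidate set and the scoring function come from the wrapped score-based explainer rather than being fixed by hand as above.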

Cite

Text

Hajiramezanali et al. "Score-Based Explainability for Graph Representations." Transactions on Machine Learning Research, 2024.

Markdown

[Hajiramezanali et al. "Score-Based Explainability for Graph Representations." Transactions on Machine Learning Research, 2024.](https://mlanthology.org/tmlr/2024/hajiramezanali2024tmlr-scorebased/)

BibTeX

@article{hajiramezanali2024tmlr-scorebased,
  title     = {{Score-Based Explainability for Graph Representations}},
  author    = {Hajiramezanali, Ehsan and Maleki, Sepideh and Shen, Max W. and Chuang, Kangway V. and Biancalani, Tommaso and Scalia, Gabriele},
  journal   = {Transactions on Machine Learning Research},
  year      = {2024},
  url       = {https://mlanthology.org/tmlr/2024/hajiramezanali2024tmlr-scorebased/}
}