Faithful and Accurate Self-Attention Attribution for Message Passing Neural Networks via the Computation Tree Viewpoint

Abstract

The self-attention mechanism has been adopted in various popular message passing neural networks (MPNNs), enabling the model to adaptively control the amount of information that flows along the edges of the underlying graph. Such attention-based MPNNs (Att-GNNs) have also been used as a baseline in multiple studies on explainable AI (XAI), since attention has long been regarded as a natural form of model interpretation, a viewpoint already popularized in other domains (e.g., natural language processing and computer vision). However, existing studies often use naive calculations to derive attribution scores from attention, undermining the potential of attention as an interpretation of Att-GNNs. In our study, we aim to fill the gap between the widespread usage of Att-GNNs and their potential explainability via attention. To this end, we propose GAtt, an edge attribution calculation method for self-attention MPNNs based on the computation tree, a rooted tree that reflects the computation process of the underlying model. Despite its simplicity, we empirically demonstrate the effectiveness of GAtt in three aspects of model explanation (faithfulness, explanation accuracy, and case studies), using both synthetic and real-world benchmark datasets. In all cases, GAtt greatly improves edge attribution scores, especially compared to the previous naive approach.
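To make the computation-tree viewpoint concrete, below is a minimal Python sketch of one plausible reading of the idea: the attribution of an edge is accumulated over all of its occurrences in the computation tree rooted at the target node, with each occurrence weighted by the product of attention weights on the path from the root down to it (the naive baseline would instead read off per-layer attention directly). The names `gatt_scores`, `att`, and `neighbors` are illustrative assumptions, and this is not the authors' reference implementation.

```python
import numpy as np

def gatt_scores(att, neighbors, root):
    """Computation-tree-based edge attribution for one root node (sketch).

    att:       list of length L; att[l][i, j] is the attention weight that
               node i assigns to neighbor j at layer l, where l = L-1 is the
               layer closest to the output. Dense arrays for simplicity.
    neighbors: dict mapping each node to its in-neighbors (include a
               self-loop here if the model uses one).
    root:      index of the node whose prediction we want to explain.

    Returns {(j, i): score}, where the score of edge j -> i sums, over all
    occurrences of that edge in the computation tree rooted at `root`, the
    product of attention weights on the path from the root to the occurrence.
    """
    L = len(att)
    scores = {}

    def walk(node, layer, path_weight):
        # Each recursion level descends one layer of the computation tree.
        if layer < 0:
            return
        for nb in neighbors[node]:
            w = path_weight * att[layer][node, nb]
            scores[(nb, node)] = scores.get((nb, node), 0.0) + w
            walk(nb, layer - 1, w)

    walk(root, L - 1, 1.0)
    return scores


if __name__ == "__main__":
    # Toy path graph 0 - 1 - 2 with self-loops and two attention layers,
    # using random row-normalized attention restricted to neighbors.
    nbrs = {0: [0, 1], 1: [0, 1, 2], 2: [1, 2]}
    rng = np.random.default_rng(0)
    att = []
    for _ in range(2):
        m = np.zeros((3, 3))
        for i, js in nbrs.items():
            w = rng.random(len(js))
            m[i, list(js)] = w / w.sum()
        att.append(m)
    print(gatt_scores(att, nbrs, root=1))
```

The enumeration is exponential in the number of layers, which is acceptable for the shallow (2- to 3-layer) Att-GNNs typically used in practice; a dynamic-programming variant over node-layer pairs would avoid the blowup.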

Cite

Text

Shin et al. "Faithful and Accurate Self-Attention Attribution for Message Passing Neural Networks via the Computation Tree Viewpoint." AAAI Conference on Artificial Intelligence, 2025. doi:10.1609/AAAI.V39I19.34254

Markdown

[Shin et al. "Faithful and Accurate Self-Attention Attribution for Message Passing Neural Networks via the Computation Tree Viewpoint." AAAI Conference on Artificial Intelligence, 2025.](https://mlanthology.org/aaai/2025/shin2025aaai-faithful/) doi:10.1609/AAAI.V39I19.34254

BibTeX

@inproceedings{shin2025aaai-faithful,
  title     = {{Faithful and Accurate Self-Attention Attribution for Message Passing Neural Networks via the Computation Tree Viewpoint}},
  author    = {Shin, Yong-Min and Li, Siqing and Cao, Xin and Shin, Won-Yong},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2025},
  pages     = {20461--20469},
  doi       = {10.1609/AAAI.V39I19.34254},
  url       = {https://mlanthology.org/aaai/2025/shin2025aaai-faithful/}
}