Self-Explainable Graph Transformer for Link Sign Prediction

Abstract

Signed Graph Neural Networks (SGNNs) have been shown to be effective in analyzing complex patterns in real-world situations where positive and negative links coexist. However, SGNN models suffer from poor explainability, which limits their adoption in critical scenarios that require understanding the rationale behind predictions. To the best of our knowledge, there is currently no research work on the explainability of SGNN models. Our goal is to address the explainability of decision-making for the downstream task of link sign prediction specific to signed graph neural networks. Since post-hoc explanations are not derived directly from the models, they may be biased and misrepresent the true explanations. Therefore, in this paper we introduce a Self-Explainable Signed Graph transformer (SE-SGformer) framework, which not only outputs explainable information but also ensures high prediction accuracy. Specifically, we propose a new Transformer architecture for signed graphs and theoretically demonstrate that using positional encoding based on signed random walks has greater expressive power than current SGNN methods and other positional-encoding graph Transformer-based approaches. We construct a novel explainable decision process by discovering the K-nearest (farthest) positive (negative) neighbors of a node to replace the neural network-based decoder for predicting edge signs. These K positive (negative) neighbors represent crucial information about the formation of positive (negative) edges between nodes and thus can serve as important explanatory information in the decision-making process. We conducted experiments on several real-world datasets to validate the effectiveness of SE-SGformer, which outperforms state-of-the-art methods, improving prediction accuracy by 2.2% and explainability accuracy by 73.1% in the best-case scenario.
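To make the neighbor-based decision process concrete, the following is a minimal sketch (not the authors' implementation) of how a neural decoder could be replaced by a comparison against a node's K nearest positive neighbors and K nearest negative neighbors in embedding space; all names (`predict_sign`, the toy embeddings) are illustrative assumptions:

```python
import numpy as np

def predict_sign(z, pos_nbrs, neg_nbrs, u, v, k=3):
    """Toy K-neighbor decoder for the sign of edge (u, v).

    z        : (n, d) array of node embeddings
    pos_nbrs : dict mapping a node to its positive neighbors
    neg_nbrs : dict mapping a node to its negative neighbors
    Predicts +1 if v lies closer (on average, over the k nearest)
    to u's positive neighbors than to its negative neighbors.
    The selected neighbors double as the explanation for the decision.
    """
    def k_mean_dist(nbrs):
        if not nbrs:
            return np.inf  # no evidence from this sign
        dists = np.sort([np.linalg.norm(z[v] - z[n]) for n in nbrs])
        return dists[:k].mean()

    return 1 if k_mean_dist(pos_nbrs.get(u, [])) <= k_mean_dist(neg_nbrs.get(u, [])) else -1

# Toy example: node 2 (a positive neighbor of 0) sits near node 1,
# while node 3 (a negative neighbor of 0) sits far away.
z = np.array([[0.0, 0.0], [1.0, 0.0], [0.9, 0.1], [5.0, 5.0]])
sign = predict_sign(z, {0: [2]}, {0: [3]}, u=0, v=1, k=1)
```

Because the prediction is read directly off the chosen neighbors rather than from an opaque decoder, those neighbors themselves serve as the explanation, which is the self-explainability property the abstract describes.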

Cite

Text

Li et al. "Self-Explainable Graph Transformer for Link Sign Prediction." AAAI Conference on Artificial Intelligence, 2025. doi:10.1609/AAAI.V39I11.33316

Markdown

[Li et al. "Self-Explainable Graph Transformer for Link Sign Prediction." AAAI Conference on Artificial Intelligence, 2025.](https://mlanthology.org/aaai/2025/li2025aaai-self-a/) doi:10.1609/AAAI.V39I11.33316

BibTeX

@inproceedings{li2025aaai-self-a,
  title     = {{Self-Explainable Graph Transformer for Link Sign Prediction}},
  author    = {Li, Lu and Liu, Jiale and Ji, Xingyu and Wang, Maojun and Zhang, Zeyu},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2025},
  pages     = {12084--12092},
  doi       = {10.1609/AAAI.V39I11.33316},
  url       = {https://mlanthology.org/aaai/2025/li2025aaai-self-a/}
}