Evaluating Explainability Techniques on Discrete-Time Graph Neural Networks
Abstract
Discrete-time temporal Graph Neural Networks (GNNs) are powerful tools for modeling evolving graph-structured data and are widely used in decision-making processes across domains such as social network analysis, financial systems, and collaboration networks. Explaining the predictions of these models is an important research area, since their decisions play a critical role in building trust in social and financial systems. However, the explainability of temporal GNNs remains challenging and relatively unexplored. Hence, in this work, we propose a novel framework to evaluate explainability techniques tailored for discrete-time temporal GNNs. Our framework introduces new training and evaluation settings that capture the evolving nature of temporal data, defines metrics to assess the temporal aspects of explanations, and establishes baselines and models specific to discrete-time temporal networks. Through extensive experiments, we identify the best explainability techniques for discrete-time GNNs in terms of fidelity, efficiency, and human-readability trade-offs. By addressing the unique challenges of temporal graph data, our framework sets the stage for future advancements in explaining discrete-time GNNs.
Cite
Text
Dileo et al. "Evaluating Explainability Techniques on Discrete-Time Graph Neural Networks." Transactions on Machine Learning Research, 2025.

Markdown

[Dileo et al. "Evaluating Explainability Techniques on Discrete-Time Graph Neural Networks." Transactions on Machine Learning Research, 2025.](https://mlanthology.org/tmlr/2025/dileo2025tmlr-evaluating/)

BibTeX
@article{dileo2025tmlr-evaluating,
title = {{Evaluating Explainability Techniques on Discrete-Time Graph Neural Networks}},
author = {Dileo, Manuel and Zignani, Matteo and Gaito, Sabrina Tiziana},
journal = {Transactions on Machine Learning Research},
year = {2025},
url = {https://mlanthology.org/tmlr/2025/dileo2025tmlr-evaluating/}
}