What Does My GNN Really Capture? on Exploring Internal GNN Representations
Abstract
Graph Neural Networks (GNNs) are very effective at classifying graphs, but their internal workings are opaque, which limits their field of application. Existing methods for explaining GNNs focus on disclosing the relationships between input graphs and model decisions. In this article, we propose a method that goes further and isolates the internal features, hidden in the network layers, that are automatically identified by the GNN and used in the decision process. We show that this method makes it possible to identify the parts of the input graphs used by the GNN with much less bias than state-of-the-art methods, and thus to bring confidence to the decision process.
Cite
Text
Veyrin-Forrer et al. "What Does My GNN Really Capture? on Exploring Internal GNN Representations." International Joint Conference on Artificial Intelligence, 2022. doi:10.24963/IJCAI.2022/105
Markdown
[Veyrin-Forrer et al. "What Does My GNN Really Capture? on Exploring Internal GNN Representations." International Joint Conference on Artificial Intelligence, 2022.](https://mlanthology.org/ijcai/2022/veyrinforrer2022ijcai-my/) doi:10.24963/IJCAI.2022/105
BibTeX
@inproceedings{veyrinforrer2022ijcai-my,
title = {{What Does My GNN Really Capture? on Exploring Internal GNN Representations}},
author = {Veyrin-Forrer, Luca and Kamal, Ataollah and Duffner, Stefan and Plantevit, Marc and Robardet, Céline},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {2022},
pages = {747--752},
doi = {10.24963/IJCAI.2022/105},
url = {https://mlanthology.org/ijcai/2022/veyrinforrer2022ijcai-my/}
}