Logical Distillation of Graph Neural Networks

Abstract

We distill a symbolic model from a Graph Neural Network (GNN). Recent results have shown connections between the expressivity of GNNs and the two-variable fragment of first-order logic with counting quantifiers (C2). We use decision trees to represent formulas in an extension of C2 and present an algorithm that distills such decision trees from a given GNN model. We evaluate our approach on multiple GNN architectures. The distilled models are interpretable, succinct, and attain accuracy similar to that of the underlying GNN. Furthermore, when the ground truth is expressible in C2, our approach outperforms the GNN.
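
For intuition, here is a minimal example (ours, not drawn from the paper) of a node property expressible in C2, the fragment that allows only the variables x and y but adds counting quantifiers:

\varphi(x) \;=\; \exists^{\geq 2} y \,\bigl(E(x, y) \wedge \mathrm{Blue}(y)\bigr)

The formula holds at every node x with at least two blue neighbours. Local, neighbourhood-counting properties of this shape are exactly the kind the abstract's cited expressivity results connect to message-passing GNNs, which is why decision trees over such formulas make natural surrogate models.

The distillation step itself can be pictured with a generic surrogate-model sketch: fit an interpretable decision tree to reproduce a teacher model's predictions and measure fidelity. This is an illustration of the general idea only, not the paper's algorithm; the features X and the teacher labels below are synthetic stand-ins for GNN inputs and outputs.

import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))                  # stand-in node features
teacher = (X[:, 0] + X[:, 2] > 0).astype(int)  # stand-in GNN predictions

# Fit a shallow, human-readable tree that mimics the teacher's outputs.
tree = DecisionTreeClassifier(max_depth=3).fit(X, teacher)
print(export_text(tree, feature_names=[f"f{i}" for i in range(4)]))
print("fidelity:", tree.score(X, teacher))     # agreement with the teacher

In place of raw feature thresholds, the paper's trees branch on logical formulas, so the printed tree itself reads as a formula in the C2 extension.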

Cite

Text

Pluska et al. "Logical Distillation of Graph Neural Networks." ICML 2024 Workshops: MI, 2024.

Markdown

[Pluska et al. "Logical Distillation of Graph Neural Networks." ICML 2024 Workshops: MI, 2024.](https://mlanthology.org/icmlw/2024/pluska2024icmlw-logical/)

BibTeX

@inproceedings{pluska2024icmlw-logical,
  title     = {{Logical Distillation of Graph Neural Networks}},
  author    = {Pluska, Alexander and Welke, Pascal and Gärtner, Thomas and Malhotra, Sagar},
  booktitle = {ICML 2024 Workshops: MI},
  year      = {2024},
  url       = {https://mlanthology.org/icmlw/2024/pluska2024icmlw-logical/}
}