Covered Forest: Fine-Grained Generalization Analysis of Graph Neural Networks
Abstract
The expressive power of message-passing graph neural networks (MPNNs) is reasonably well understood, primarily through combinatorial techniques from graph isomorphism testing. However, MPNNs’ generalization abilities—making meaningful predictions beyond the training set—remain less explored. Current generalization analyses often overlook graph structure, limit the focus to specific aggregation functions, and assume the impractical, hard-to-optimize $0$-$1$ loss function. Here, we extend recent advances in graph similarity theory to assess the influence of graph structure, aggregation, and loss functions on MPNNs’ generalization abilities. Our empirical study supports our theoretical insights, improving our understanding of MPNNs’ generalization properties.
Cite
Text
Vasileiou et al. "Covered Forest: Fine-Grained Generalization Analysis of Graph Neural Networks." Proceedings of the 42nd International Conference on Machine Learning, 2025.
Markdown
[Vasileiou et al. "Covered Forest: Fine-Grained Generalization Analysis of Graph Neural Networks." Proceedings of the 42nd International Conference on Machine Learning, 2025.](https://mlanthology.org/icml/2025/vasileiou2025icml-covered/)
BibTeX
@inproceedings{vasileiou2025icml-covered,
  title     = {{Covered Forest: Fine-Grained Generalization Analysis of Graph Neural Networks}},
  author    = {Vasileiou, Antonis and Finkelshtein, Ben and Geerts, Floris and Levie, Ron and Morris, Christopher},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  year      = {2025},
  pages     = {60984--61034},
  volume    = {267},
  url       = {https://mlanthology.org/icml/2025/vasileiou2025icml-covered/}
}