Almost Surely Asymptotically Constant Graph Neural Networks
Abstract
We present a new angle on the expressive power of graph neural networks (GNNs) by studying how the predictions of real-valued GNN classifiers, such as those classifying graphs probabilistically, evolve as they are applied to larger graphs drawn from some random graph model. We show that the output converges to a constant function, which upper-bounds what these classifiers can uniformly express. This strong convergence phenomenon applies to a very wide class of GNNs, including state-of-the-art models, with aggregates including the mean and the attention-based mechanism of graph transformers. Our results apply to a broad class of random graph models, including sparse and dense variants of the Erdős-Rényi model, the stochastic block model, and the Barabási-Albert model. We empirically validate these findings, observing that the convergence phenomenon appears not only on random graphs but also on some real-world graphs.
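As an illustration of the convergence phenomenon described in the abstract, the following self-contained NumPy sketch applies a fixed, randomly initialized mean-aggregation GNN to dense Erdős-Rényi graphs G(n, p) of growing size and prints the sigmoid readout. This is a hedged toy reconstruction, not the paper's experimental setup: the architecture, parameter names (`W1`, `b1`, `w_out`, etc.), and the choice p = 0.1 are all illustrative assumptions; the paper's result holds for any fixed weights.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed random parameters for a toy two-layer mean-aggregation GNN.
# All names here are illustrative; convergence holds for any fixed weights.
d = 8
W1 = rng.normal(size=(d, d)); b1 = rng.normal(size=d)
W2 = rng.normal(size=(d, d)); b2 = rng.normal(size=d)
w_out = rng.normal(size=d)

def gnn_output(A, X):
    """Two rounds of mean aggregation + ReLU, mean readout, sigmoid."""
    deg = np.maximum(A.sum(axis=1, keepdims=True), 1.0)  # avoid div by 0
    H = np.maximum((A @ X) / deg @ W1 + b1, 0.0)
    H = np.maximum((A @ H) / deg @ W2 + b2, 0.0)
    return 1.0 / (1.0 + np.exp(-(H.mean(axis=0) @ w_out)))

for n in [50, 200, 800, 3200]:
    # Dense Erdős-Rényi graph G(n, p) with constant p = 0.1.
    M = rng.random((n, n)) < 0.1
    A = np.triu(M, 1).astype(float)
    A = A + A.T                       # symmetrize: undirected graph
    X = rng.normal(size=(n, d))       # i.i.d. Gaussian node features
    print(f"n={n:5d}  output={gnn_output(A, X):.4f}")
```

As n grows, the mean over a node's neighbors concentrates around the feature mean, so the printed outputs stabilize toward a single constant determined by the weights alone, in line with the paper's claim.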
Cite
Text
Adam-Day et al. "Almost Surely Asymptotically Constant Graph Neural Networks." Neural Information Processing Systems, 2024. doi:10.52202/079017-3965
Markdown
[Adam-Day et al. "Almost Surely Asymptotically Constant Graph Neural Networks." Neural Information Processing Systems, 2024.](https://mlanthology.org/neurips/2024/adamday2024neurips-almost/) doi:10.52202/079017-3965
BibTeX
@inproceedings{adamday2024neurips-almost,
  title     = {{Almost Surely Asymptotically Constant Graph Neural Networks}},
  author    = {Adam-Day, Sam and Benedikt, Michael and Ceylan, İsmail İlkan and Finkelshtein, Ben},
  booktitle = {Neural Information Processing Systems},
  year      = {2024},
  doi       = {10.52202/079017-3965},
  url       = {https://mlanthology.org/neurips/2024/adamday2024neurips-almost/}
}