SpeqNets: Sparsity-Aware Permutation-Equivariant Graph Networks

Abstract

While graph neural networks have clear limitations in approximating permutation-equivariant functions over graphs, more expressive, higher-order graph neural networks do not scale to large graphs. By introducing new heuristics for the graph isomorphism problem, we devise a class of universal, permutation-equivariant graph networks, which offer fine-grained control over the trade-off between expressivity and scalability and adapt to the sparsity of the graph. These architectures lead to vastly reduced computation times compared to standard higher-order graph networks while significantly improving over standard graph neural network and graph kernel architectures in terms of predictive performance.
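To make the central property concrete, here is a minimal sketch (not the paper's SpeqNet architecture) of a single message-passing layer, together with a numerical check that it is permutation-equivariant: relabelling the nodes of the input graph permutes the output node embeddings in exactly the same way, i.e. f(PAPᵀ, PX) = P·f(A, X) for any permutation matrix P. All names and dimensions below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def gnn_layer(A, X, W):
    """One message-passing step: aggregate neighbour features (with a
    self-loop), then apply a shared linear map and a ReLU nonlinearity.
    Sharing W across nodes is what makes the layer equivariant."""
    A_hat = A + np.eye(A.shape[0])          # add self-loops
    return np.maximum(A_hat @ X @ W, 0.0)   # elementwise ReLU

# Random undirected graph with 5 nodes and 3-dimensional features.
n, d = 5, 3
A = rng.integers(0, 2, size=(n, n)).astype(float)
A = np.triu(A, 1)
A = A + A.T                                  # symmetric, zero diagonal
X = rng.standard_normal((n, d))
W = rng.standard_normal((d, d))

# Permutation matrix P from a random node relabelling.
P = np.eye(n)[rng.permutation(n)]

# Equivariance check: f(P A P^T, P X) == P f(A, X).
lhs = gnn_layer(P @ A @ P.T, P @ X, W)
rhs = P @ gnn_layer(A, X, W)
assert np.allclose(lhs, rhs)
```

The check passes because the aggregation `A_hat @ X` commutes with node relabelling and both the weight matrix `W` and the ReLU act identically on every node.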

Cite

Text

Morris et al. "SpeqNets: Sparsity-Aware Permutation-Equivariant Graph Networks." ICLR 2022 Workshops: GTRL, 2022.

Markdown

[Morris et al. "SpeqNets: Sparsity-Aware Permutation-Equivariant Graph Networks." ICLR 2022 Workshops: GTRL, 2022.](https://mlanthology.org/iclrw/2022/morris2022iclrw-speqnets/)

BibTeX

@inproceedings{morris2022iclrw-speqnets,
  title     = {{SpeqNets: Sparsity-Aware Permutation-Equivariant Graph Networks}},
  author    = {Morris, Christopher and Rattan, Gaurav and Kiefer, Sandra and Ravanbakhsh, Siamak},
  booktitle = {ICLR 2022 Workshops: GTRL},
  year      = {2022},
  url       = {https://mlanthology.org/iclrw/2022/morris2022iclrw-speqnets/}
}