Rethinking Message Passing for Algorithmic Alignment
Abstract
Most Graph Neural Networks follow the message-passing paradigm, where all neighboring nodes exchange messages with each other simultaneously. We challenge this paradigm by introducing the Flood and Echo Net, a novel architecture that aligns neural computation with the principles of distributed algorithms. In our method, nodes activate sparsely upon receiving a message, leading to a wave-like activation pattern that traverses the graph. Through these sparse but parallel activations, the Flood and Echo Net is more expressive than traditional MPNNs, which are limited by the 1-WL test, and is provably more efficient in terms of message complexity. Moreover, the mechanism's ability to generalize across graphs of varying sizes makes it a practical architecture for algorithmic learning. We test the Flood and Echo Net on a variety of synthetic tasks and find that the algorithmic alignment of its execution improves generalization to larger graph sizes.
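The wave-like activation schedule described above can be illustrated with a minimal sketch. This is not the authors' implementation: it assumes a plain adjacency-dict graph representation and only computes the order in which nodes would activate during a flood phase (wave moving outward from a source) and an echo phase (wave returning), whereas the actual Flood and Echo Net applies learned state updates at each activation.

```python
from collections import deque

def flood_and_echo_schedule(adj, source):
    """Illustrative activation schedule for one flood-and-echo phase.

    adj: dict mapping each node to a list of its neighbors.
    Nodes activate sparsely: a node joins the wave only when the first
    message reaches it, instead of all nodes exchanging messages in
    every round as in standard synchronous message passing.
    """
    # Flood phase: the wave travels outward from the source, so nodes
    # activate in order of their distance from it (BFS order).
    dist = {source: 0}
    queue = deque([source])
    flood_order = []
    while queue:
        u = queue.popleft()
        flood_order.append(u)
        for v in adj[u]:
            if v not in dist:          # node activates on first message
                dist[v] = dist[u] + 1
                queue.append(v)
    # Echo phase: the wave returns, so the most distant nodes activate
    # first and the source activates last.
    echo_order = sorted(dist, key=lambda v: -dist[v])
    return flood_order, echo_order

# Usage: on a path graph 0-1-2, the flood wave runs 0 -> 1 -> 2 and the
# echo wave runs back 2 -> 1 -> 0.
path = {0: [1], 1: [0, 2], 2: [1]}
flood, echo = flood_and_echo_schedule(path, 0)
```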
Cite
Text
Mathys et al. "Rethinking Message Passing for Algorithmic Alignment." NeurIPS 2024 Workshops: NeurReps, 2024.
Markdown
[Mathys et al. "Rethinking Message Passing for Algorithmic Alignment." NeurIPS 2024 Workshops: NeurReps, 2024.](https://mlanthology.org/neuripsw/2024/mathys2024neuripsw-rethinking/)
BibTeX
@inproceedings{mathys2024neuripsw-rethinking,
title = {{Rethinking Message Passing for Algorithmic Alignment}},
author = {Mathys, Joël and Grötschla, Florian and Nadimpalli, Kalyan Varma and Wattenhofer, Roger},
booktitle = {NeurIPS 2024 Workshops: NeurReps},
year = {2024},
url = {https://mlanthology.org/neuripsw/2024/mathys2024neuripsw-rethinking/}
}