Graph Neural Networks Gone Hogwild

Abstract

Graph neural networks (GNNs) appear to be powerful tools to learn state representations for agents in distributed, decentralized multi-agent systems, but generate catastrophically incorrect predictions when nodes update asynchronously during inference. This failure under asynchrony effectively excludes these architectures from many potential applications where synchrony is difficult or impossible to enforce, e.g., robotic swarms or sensor networks. In this work we identify "implicitly-defined" GNNs as a class of architectures which is provably robust to asynchronous "hogwild" inference, adapting convergence guarantees from work in asynchronous and distributed optimization. We then propose a novel implicitly-defined GNN architecture, which we call an energy GNN. We show that this architecture outperforms other GNNs from this class on a variety of synthetic tasks inspired by multi-agent systems.
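To make the abstract's central idea concrete, below is a minimal illustrative sketch (not the paper's code or its energy GNN) of "hogwild" inference for an implicitly-defined GNN: node states are defined as the fixed point of a contractive local update, so nodes that update one at a time from possibly stale neighbor states still converge to the same answer as a fully synchronized schedule. The specific update rule, graph, and spectral-norm scaling are assumptions chosen purely to make the map contractive for this demonstration.

# Hypothetical implicitly-defined GNN layer: node states satisfy
#   x_i = tanh(W x_i + U * mean_{j in N(i)} x_j + b_i).
# Shrinking the spectral norms of W and U below 1 (an assumption made here)
# makes the update a contraction, so synchronous and asynchronous
# ("hogwild") inference reach the same fixed point.
import numpy as np

rng = np.random.default_rng(0)
n_nodes, d = 8, 4

# Small undirected graph: a ring with two extra chords (illustrative only).
adj = np.zeros((n_nodes, n_nodes), dtype=bool)
for i in range(n_nodes):
    adj[i, (i + 1) % n_nodes] = adj[(i + 1) % n_nodes, i] = True
adj[0, 4] = adj[4, 0] = adj[2, 6] = adj[6, 2] = True

def shrink(M, target=0.4):
    """Rescale M so its spectral norm equals `target`, keeping the update contractive."""
    return M * (target / np.linalg.norm(M, 2))

W = shrink(rng.normal(size=(d, d)))
U = shrink(rng.normal(size=(d, d)))
b = rng.normal(size=(n_nodes, d))  # per-node input features / bias

def local_update(x, i):
    """One node's update, reading whatever neighbor states it currently sees."""
    neigh = x[adj[i]].mean(axis=0)
    return np.tanh(x[i] @ W.T + neigh @ U.T + b[i])

# Synchronous inference: every node updates in lockstep until convergence.
x_sync = np.zeros((n_nodes, d))
for _ in range(200):
    x_sync = np.stack([local_update(x_sync, i) for i in range(n_nodes)])

# "Hogwild" inference: one randomly chosen node updates at a time, with no
# global synchronization barrier and possibly stale neighbor information.
x_async = np.zeros((n_nodes, d))
for _ in range(5000):
    i = rng.integers(n_nodes)
    x_async[i] = local_update(x_async, i)

# The gap should be small: both schedules converge to the same fixed point.
print("max |sync - async| =", np.abs(x_sync - x_async).max())

This is the sense in which implicitly-defined GNNs are robust to asynchrony: the prediction is characterized by a fixed-point condition rather than by a fixed sequence of synchronized message-passing rounds, so the update schedule does not change the answer.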

Cite

Text

Solodova et al. "Graph Neural Networks Gone Hogwild." International Conference on Learning Representations, 2025.

Markdown

[Solodova et al. "Graph Neural Networks Gone Hogwild." International Conference on Learning Representations, 2025.](https://mlanthology.org/iclr/2025/solodova2025iclr-graph/)

BibTeX

@inproceedings{solodova2025iclr-graph,
  title     = {{Graph Neural Networks Gone Hogwild}},
  author    = {Solodova, Olga and Richardson, Nick and Oktay, Deniz and Adams, Ryan P},
  booktitle = {International Conference on Learning Representations},
  year      = {2025},
  url       = {https://mlanthology.org/iclr/2025/solodova2025iclr-graph/}
}