Cooperative Graph Neural Networks
Abstract
Graph neural networks are popular architectures for graph machine learning, based on iterative computation of node representations of an input graph through a series of invariant transformations. A large class of graph neural networks follow a standard message-passing paradigm: at every layer, each node state is updated based on an aggregate of messages from its neighborhood. In this work, we propose a novel framework for training graph neural networks, where every node is viewed as a player that can choose to either listen, broadcast, listen and broadcast, or isolate. The standard message-propagation scheme can then be viewed as a special case of this framework in which every node listens and broadcasts to all neighbors. Our approach offers a more flexible and dynamic message-passing paradigm, where each node can determine its own strategy based on its state, effectively exploring the graph topology while learning. We provide a theoretical analysis of the new message-passing scheme, which is further supported by an extensive empirical analysis on synthetic and real-world datasets.
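To make the idea in the abstract concrete, below is a minimal, illustrative sketch of a cooperative message-passing layer, not the authors' implementation. It assumes PyTorch, a dense adjacency matrix, and a straight-through Gumbel-softmax for the per-node action choice; the action names, gating rule (an edge from v to u is active only when u listens and v broadcasts), and layer structure are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical action set matching the abstract's description.
ACTIONS = ["isolate", "listen", "broadcast", "listen_and_broadcast"]

class CoopMessagePassingLayer(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.action_head = nn.Linear(dim, len(ACTIONS))  # per-node action logits
        self.update = nn.Linear(2 * dim, dim)            # combine self state + aggregate

    def forward(self, x, adj, tau=1.0):
        # x: (n, dim) node states; adj: (n, n) {0,1} adjacency matrix.
        logits = self.action_head(x)
        # Straight-through Gumbel-softmax: discrete action, differentiable surrogate.
        action = F.gumbel_softmax(logits, tau=tau, hard=True)   # (n, 4) one-hot
        listens = action[:, 1] + action[:, 3]                   # listen or both
        broadcasts = action[:, 2] + action[:, 3]                # broadcast or both
        # Edge v -> u contributes only if u listens and v broadcasts.
        gate = listens.unsqueeze(1) * broadcasts.unsqueeze(0) * adj  # (n, n)
        deg = gate.sum(dim=1, keepdim=True).clamp(min=1.0)
        agg = gate @ x / deg                                     # mean over active neighbors
        return F.relu(self.update(torch.cat([x, agg], dim=-1)))

# Usage on a small random graph.
n, dim = 5, 8
x = torch.randn(n, dim)
adj = (torch.rand(n, n) > 0.5).float()
adj = ((adj + adj.T) > 0).float().fill_diagonal_(0)
layer = CoopMessagePassingLayer(dim)
out = layer(x, adj)
print(out.shape)  # torch.Size([5, 8])
```

Setting every node's action to "listen and broadcast" recovers standard message passing over all neighbors, which is the special case noted in the abstract.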
Cite
Text
Finkelshtein et al. "Cooperative Graph Neural Networks." International Conference on Machine Learning, 2024.
Markdown
[Finkelshtein et al. "Cooperative Graph Neural Networks." International Conference on Machine Learning, 2024.](https://mlanthology.org/icml/2024/finkelshtein2024icml-cooperative/)
BibTeX
@inproceedings{finkelshtein2024icml-cooperative,
title = {{Cooperative Graph Neural Networks}},
author = {Finkelshtein, Ben and Huang, Xingyue and Bronstein, Michael M. and Ceylan, Ismail Ilkan},
booktitle = {International Conference on Machine Learning},
year = {2024},
  pages = {13633--13659},
volume = {235},
url = {https://mlanthology.org/icml/2024/finkelshtein2024icml-cooperative/}
}