Towards Characterizing the Value of Edge Embeddings in Graph Neural Networks
Abstract
Graph neural networks (GNNs) are the dominant approach to solving machine learning problems defined over graphs. Despite much theoretical and empirical work in recent years, our understanding of finer-grained aspects of architectural design for GNNs remains impoverished. In this paper, we consider the benefits of architectures that maintain and update edge embeddings. On the theoretical front, under a suitable computational abstraction for a layer in the model, as well as memory constraints on the embeddings, we show that there are natural tasks on graphical models for which architectures leveraging edge embeddings can be much shallower. Our techniques are inspired by results on time-space tradeoffs in theoretical computer science. Empirically, we show that architectures which maintain edge embeddings almost always improve on their node-based counterparts, frequently significantly so in topologies that have "hub" nodes.
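For readers unfamiliar with the distinction the abstract draws, the following is a minimal sketch of a message-passing layer that maintains and updates per-edge embeddings alongside node embeddings. It is illustrative only: the update rules, weight shapes, and function names are assumptions made for this sketch, not the specific architecture or computational abstraction analyzed in the paper.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def edge_gnn_layer(h, e, edges, W_node, W_edge):
    """One message-passing layer with persistent edge embeddings (illustrative).

    h:      (n, d) node embeddings
    e:      (m, d) edge embeddings, one per directed edge
    edges:  list of (src, dst) pairs, length m
    W_node: (2d, d) weights mixing a node with its aggregated messages
    W_edge: (3d, d) weights updating an edge from itself and its endpoints
    """
    n, d = h.shape
    # 1. Update each edge embedding from its current value and both endpoints.
    #    A node-only architecture has no `e` to carry state across layers.
    e_new = np.stack([
        relu(np.concatenate([e[k], h[u], h[v]]) @ W_edge)
        for k, (u, v) in enumerate(edges)
    ])
    # 2. Aggregate updated edge embeddings as messages into destination nodes.
    msg = np.zeros((n, d))
    for k, (u, v) in enumerate(edges):
        msg[v] += e_new[k]
    # 3. Update node embeddings from their old value and aggregated messages.
    h_new = relu(np.concatenate([h, msg], axis=1) @ W_node)
    return h_new, e_new

# Tiny usage example: a 3-node path graph with random weights.
rng = np.random.default_rng(0)
d = 4
edges = [(0, 1), (1, 0), (1, 2), (2, 1)]
h = rng.normal(size=(3, d))
e = rng.normal(size=(len(edges), d))
W_node = rng.normal(size=(2 * d, d))
W_edge = rng.normal(size=(3 * d, d))
h, e = edge_gnn_layer(h, e, edges, W_node, W_edge)
```

The key point the sketch makes concrete: because `e` persists across layers, per-edge state can accumulate information without being squeezed through the (memory-constrained) node embeddings, which is the kind of tradeoff the paper's theory formalizes.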
Cite
Text
Rohatgi et al. "Towards Characterizing the Value of Edge Embeddings in Graph Neural Networks." Proceedings of the 42nd International Conference on Machine Learning, 2025.
Markdown
[Rohatgi et al. "Towards Characterizing the Value of Edge Embeddings in Graph Neural Networks." Proceedings of the 42nd International Conference on Machine Learning, 2025.](https://mlanthology.org/icml/2025/rohatgi2025icml-characterizing/)
BibTeX
@inproceedings{rohatgi2025icml-characterizing,
title = {{Towards Characterizing the Value of Edge Embeddings in Graph Neural Networks}},
author = {Rohatgi, Dhruv and Marwah, Tanya and Lipton, Zachary Chase and Lu, Jianfeng and Moitra, Ankur and Risteski, Andrej},
booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
year = {2025},
pages = {51905--51923},
volume = {267},
url = {https://mlanthology.org/icml/2025/rohatgi2025icml-characterizing/}
}