Sketch-Augmented Features Improve Learning Long-Range Dependencies in Graph Neural Networks

Abstract

Graph Neural Networks (GNNs) learn on graph-structured data by iteratively aggregating local neighborhood information. While this local message-passing paradigm imparts a powerful inductive bias and exploits graph sparsity, it also yields three key challenges: (i) oversquashing of long-range information, (ii) oversmoothing of node representations, and (iii) limited expressive power. In this work we inject randomized global embeddings of node features, which we term Sketched Random Features, into standard GNNs, enabling them to efficiently capture long-range dependencies. The embeddings are unique, distance-sensitive, and topology-agnostic, properties which we analytically and empirically show alleviate the aforementioned limitations when injected into GNNs. Experimental results on real-world graph learning tasks confirm that this strategy consistently improves performance over baseline GNNs, offering both a standalone solution and a complementary enhancement to existing techniques such as graph positional encodings.
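To make the idea concrete, here is a minimal, hypothetical sketch of the augmentation pattern described above: each node's input features are concatenated with a randomized, distance-sensitive embedding computed from the features alone (here, a Gaussian random projection in the spirit of Johnson-Lindenstrauss), independent of graph topology. This is an illustrative approximation, not the paper's exact Sketched Random Features construction.

```python
import numpy as np

rng = np.random.default_rng(0)

def sketch_features(X, k=16, rng=rng):
    """Map node features X (n x d) to a k-dim randomized sketch.

    A Gaussian random projection approximately preserves pairwise
    distances (distance-sensitive) and uses no edge information
    (topology-agnostic). Illustrative only; the paper's construction
    may differ.
    """
    n, d = X.shape
    R = rng.normal(scale=1.0 / np.sqrt(k), size=(d, k))
    return X @ R

# Augment node features with their sketch before message passing;
# the concatenated features would then be fed to a standard GNN layer.
X = rng.normal(size=(100, 32))                           # 100 nodes, 32-dim features
X_aug = np.concatenate([X, sketch_features(X)], axis=1)  # shape (100, 48)
```

Because the sketch depends only on node features and not on the adjacency structure, it carries global information to every node in a single step, which is the mechanism the abstract credits for alleviating oversquashing.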

Cite

Text

Hosseini et al. "Sketch-Augmented Features Improve Learning Long-Range Dependencies in Graph Neural Networks." Advances in Neural Information Processing Systems, 2025.

Markdown

[Hosseini et al. "Sketch-Augmented Features Improve Learning Long-Range Dependencies in Graph Neural Networks." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/hosseini2025neurips-sketchaugmented/)

BibTeX

@inproceedings{hosseini2025neurips-sketchaugmented,
  title     = {{Sketch-Augmented Features Improve Learning Long-Range Dependencies in Graph Neural Networks}},
  author    = {Hosseini, Ryien and Simini, Filippo and Vishwanath, Venkatram and Willett, Rebecca and Hoffmann, Henry},
  booktitle = {Advances in Neural Information Processing Systems},
  year      = {2025},
  url       = {https://mlanthology.org/neurips/2025/hosseini2025neurips-sketchaugmented/}
}