Random Propagations in GNNs
Abstract
Graph learning benefits many fields. However, Graph Neural Networks (GNNs) often struggle with scalability, especially on large graphs. At the same time, many graph learning tasks appear to be easy from a learning standpoint: simple diffusion alone already yields favorable performance. In this paper, we present Random Propagation GNN (RAP-GNN), a framework that addresses two main research questions: (i) can random propagations in GNNs be as effective as end-to-end optimized GNNs? and (ii) can they reduce the computational burden of traditional GNNs? Our empirical findings indicate that RAP-GNN reduces training time by up to 58%, while maintaining strong accuracy on node and graph classification tasks.
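The core idea the abstract describes, as we read it, is to keep the GNN's propagation weights random and frozen while training only a lightweight readout. Below is a minimal sketch of that idea in PyTorch; the class name RandomPropagationGNN, the GCN-style normalized-adjacency propagation, and all hyperparameters are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class RandomPropagationGNN(nn.Module):
    """Illustrative sketch: frozen random propagation layers, trained readout only."""

    def __init__(self, in_dim: int, hidden_dim: int, out_dim: int, num_layers: int = 3):
        super().__init__()
        dims = [in_dim] + [hidden_dim] * num_layers
        for i in range(num_layers):
            # Random propagation weights stored as buffers: they receive no gradients
            # and are never touched by the optimizer.
            w = torch.randn(dims[i], dims[i + 1]) / dims[i] ** 0.5
            self.register_buffer(f"w{i}", w)
        self.num_layers = num_layers
        # Only this readout head is trained end-to-end.
        self.readout = nn.Linear(hidden_dim, out_dim)

    def forward(self, x: torch.Tensor, adj_norm: torch.Tensor) -> torch.Tensor:
        h = x
        for i in range(self.num_layers):
            # GCN-style propagation with a normalized adjacency and frozen weights.
            h = F.relu(adj_norm @ h @ getattr(self, f"w{i}"))
        return self.readout(h)


if __name__ == "__main__":
    # Toy node-classification setup: 5 nodes, 8 input features, 3 classes.
    n, d = 5, 8
    x = torch.randn(n, d)
    a = (torch.rand(n, n) < 0.4).float()
    a = ((a + a.t() + torch.eye(n)) > 0).float()          # symmetrize, add self-loops
    deg_inv_sqrt = a.sum(dim=1).clamp(min=1.0).pow(-0.5)
    adj_norm = deg_inv_sqrt[:, None] * a * deg_inv_sqrt[None, :]

    model = RandomPropagationGNN(d, hidden_dim=16, out_dim=3)
    # model.parameters() contains only the readout weights, so the optimizer
    # state and backward pass stay small.
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
    logits = model(x, adj_norm)
    print(logits.shape)  # torch.Size([5, 3])
```

Because only the readout receives gradients in this sketch, the backward pass and optimizer state are much smaller than in an end-to-end trained GNN, which is consistent with the training-time savings the abstract reports.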
Cite
Text
Bui et al. "Random Propagations in GNNs." NeurIPS 2024 Workshops: UniReps, 2024. https://mlanthology.org/neuripsw/2024/bui2024neuripsw-random/
BibTeX
@inproceedings{bui2024neuripsw-random,
title = {{Random Propagations in GNNs}},
author = {Bui, Thu and Naman, Anugunj and Schönlieb, Carola-Bibiane and Ribeiro, Bruno and Bevilacqua, Beatrice and Eliasof, Moshe},
booktitle = {NeurIPS 2024 Workshops: UniReps},
year = {2024},
url = {https://mlanthology.org/neuripsw/2024/bui2024neuripsw-random/}
}