Towards Efficient Training of Graph Neural Networks: A Multiscale Approach

Abstract

Graph Neural Networks (GNNs) have become powerful tools for learning from graph-structured data, finding applications across diverse domains. However, as graph sizes and connectivity increase, standard GNN training methods face significant computational and memory challenges, limiting their scalability and efficiency. In this paper, we present a novel framework for efficient multiscale training of GNNs. Our approach leverages hierarchical graph representations and subgraphs, enabling the integration of information across multiple scales and resolutions. By utilizing coarser graph abstractions and subgraphs, each with fewer nodes and edges, we significantly reduce computational overhead during training. Building on this framework, we propose a suite of scalable training strategies, including coarse-to-fine learning, subgraph-to-full-graph transfer, and multiscale gradient computation. We also provide a theoretical analysis of our methods and demonstrate their effectiveness across various datasets and learning tasks. Our results show that multiscale training can substantially accelerate GNN training for large-scale problems while maintaining, or even improving, predictive performance.
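The abstract's coarse-to-fine strategy can be pictured with a minimal sketch: train cheaply on a coarsened graph first, then transfer the learned weights to the full-resolution graph. The sketch below, in PyTorch Geometric, is illustrative only; the random-clustering coarsening operator, the two-layer GCN, the synthetic regression data, and the epoch schedule are all assumptions on our part, not the paper's actual operators or hyperparameters. The enabling fact is that GNN weights act on feature channels rather than on individual nodes, so a model pre-trained on the coarse graph can be fine-tuned directly on the fine one.

# Hypothetical coarse-to-fine training sketch; the coarsening operator,
# model, and schedule are illustrative assumptions, not the paper's method.
import torch
import torch.nn.functional as F
from torch_geometric.data import Data
from torch_geometric.nn import GCNConv
from torch_geometric.utils import coalesce

class GCN(torch.nn.Module):
    def __init__(self, in_dim, hid_dim, out_dim):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hid_dim)
        self.conv2 = GCNConv(hid_dim, out_dim)

    def forward(self, x, edge_index):
        # Weights depend only on feature dimensions, so the same model
        # runs unchanged on graphs of any size.
        return self.conv2(F.relu(self.conv1(x, edge_index)), edge_index)

def coarsen(data, num_clusters):
    # Toy coarsening: random node clusters; features and targets are
    # averaged per cluster, edges relabeled to cluster ids and merged.
    cluster = torch.randint(num_clusters, (data.num_nodes,))
    counts = torch.zeros(num_clusters).index_add_(
        0, cluster, torch.ones(data.num_nodes)).clamp(min=1).unsqueeze(1)
    x_c = torch.zeros(num_clusters, data.x.size(1)).index_add_(0, cluster, data.x) / counts
    y_c = torch.zeros(num_clusters, data.y.size(1)).index_add_(0, cluster, data.y) / counts
    ei = cluster[data.edge_index]              # map endpoints to cluster ids
    ei = ei[:, ei[0] != ei[1]]                 # drop self-loops created by merging
    ei = coalesce(ei, num_nodes=num_clusters)  # deduplicate parallel edges
    return Data(x=x_c, edge_index=ei, y=y_c)

def train(model, data, epochs, lr=1e-2):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = F.mse_loss(model(data.x, data.edge_index), data.y)
        loss.backward()
        opt.step()

# Synthetic full-resolution graph (node-level regression).
n, d = 1000, 16
fine = Data(x=torch.randn(n, d),
            edge_index=torch.randint(n, (2, 8000)),
            y=torch.randn(n, 1))
coarse = coarsen(fine, num_clusters=100)

model = GCN(d, 32, 1)
train(model, coarse, epochs=200)  # cheap epochs on the small graph
train(model, fine, epochs=50)     # brief fine-tuning at full resolution

In a real pipeline the random clustering would presumably be replaced by a structure-aware coarsening, and the transfer step could iterate over several resolution levels rather than the single coarse-to-fine hop shown here.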

Cite

Text

Gal et al. "Towards Efficient Training of Graph Neural Networks: A Multiscale Approach." Transactions on Machine Learning Research, 2025.

Markdown

[Gal et al. "Towards Efficient Training of Graph Neural Networks: A Multiscale Approach." Transactions on Machine Learning Research, 2025.](https://mlanthology.org/tmlr/2025/gal2025tmlr-efficient/)

BibTeX

@article{gal2025tmlr-efficient,
  title     = {{Towards Efficient Training of Graph Neural Networks: A Multiscale Approach}},
  author    = {Gal, Eshed and Eliasof, Moshe and Schönlieb, Carola-Bibiane and Haber, Eldad and Treister, Eran},
  journal   = {Transactions on Machine Learning Research},
  year      = {2025},
  url       = {https://mlanthology.org/tmlr/2025/gal2025tmlr-efficient/}
}