GIST: Distributed Training for Large-Scale Graph Convolutional Networks

Abstract

The graph convolutional network (GCN) is a go-to solution for machine learning on graphs, but its training is notoriously difficult to scale both in terms of graph size and the number of model parameters. Although some work has explored training on large-scale graphs, we pioneer efficient training of large-scale GCN models by proposing a novel, distributed training framework called \texttt{GIST}. \texttt{GIST} disjointly partitions the parameters of a GCN model into several smaller sub-GCNs that are trained independently and in parallel. Compatible with all GCN architectures and existing sampling techniques, \texttt{GIST} $i)$ improves model performance, $ii)$ scales to training on arbitrarily large graphs, $iii)$ decreases wall-clock training time, and $iv)$ enables the training of markedly overparameterized GCN models. Remarkably, with \texttt{GIST}, we train an astonishingly wide $32,\!768$-dimensional GraphSAGE model, which exceeds the capacity of a single GPU by a factor of $8\times$, to SOTA performance on the Amazon2M dataset.
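
Below is a minimal, self-contained PyTorch sketch of the core idea the abstract describes: disjointly partitioning a GCN's parameters into narrower sub-GCNs, training each independently, and copying the updated slices back into the global model. The toy data, tensor shapes, number of sub-GCNs, and helper names are illustrative assumptions for exposition, not the authors' implementation; in \texttt{GIST}, each sub-GCN would train on its own worker in parallel rather than in the sequential loop shown here.

```python
# Sketch: feature-wise parameter partitioning for a 2-layer GCN (toy example).
import torch

torch.manual_seed(0)

num_nodes, in_dim, hidden_dim, out_dim = 100, 16, 64, 7
num_subgcns = 4  # number of independent sub-GCNs (assumed for illustration)

# Toy normalized adjacency matrix and node features (random placeholders).
A_hat = torch.rand(num_nodes, num_nodes)
A_hat = A_hat / A_hat.sum(dim=1, keepdim=True)
X = torch.randn(num_nodes, in_dim)
y = torch.randint(0, out_dim, (num_nodes,))

# Global GCN parameters: W1 maps in -> hidden, W2 maps hidden -> out.
W1 = torch.randn(in_dim, hidden_dim) * 0.1
W2 = torch.randn(hidden_dim, out_dim) * 0.1

# Disjointly partition the hidden feature dimension among the sub-GCNs.
perm = torch.randperm(hidden_dim)
chunks = perm.chunk(num_subgcns)

def train_subgcn(w1, w2, steps=10, lr=0.1):
    """Train one narrow sub-GCN independently (would run on its own worker)."""
    w1 = w1.clone().requires_grad_(True)
    w2 = w2.clone().requires_grad_(True)
    opt = torch.optim.SGD([w1, w2], lr=lr)
    for _ in range(steps):
        H = torch.relu(A_hat @ X @ w1)   # layer 1 on the hidden-feature slice
        logits = A_hat @ H @ w2          # layer 2
        loss = torch.nn.functional.cross_entropy(logits, y)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return w1.detach(), w2.detach()

# Each sub-GCN sees only its slice of the hidden dimension; training is
# embarrassingly parallel across sub-GCNs (done sequentially here for brevity).
for idx in chunks:
    sub_w1, sub_w2 = train_subgcn(W1[:, idx], W2[idx, :])
    # Aggregate: write the updated slices back into the global parameters.
    W1[:, idx] = sub_w1
    W2[idx, :] = sub_w2
```

Because the hidden-dimension slices are disjoint, the sub-GCNs touch non-overlapping blocks of the weight matrices, which is what lets the global model be far wider than any single worker's memory budget.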

Cite

Text

Wolfe et al. "GIST: Distributed Training for Large-Scale Graph Convolutional Networks." NeurIPS 2022 Workshops: GLFrontiers, 2022.

Markdown

[Wolfe et al. "GIST: Distributed Training for Large-Scale Graph Convolutional Networks." NeurIPS 2022 Workshops: GLFrontiers, 2022.](https://mlanthology.org/neuripsw/2022/wolfe2022neuripsw-gist/)

BibTeX

@inproceedings{wolfe2022neuripsw-gist,
  title     = {{GIST: Distributed Training for Large-Scale Graph Convolutional Networks}},
  author    = {Wolfe, Cameron R. and Yang, Jingkang and Liao, Fangshuo and Chowdhury, Arindam and Dun, Chen and Bayer, Artun and Segarra, Santiago and Kyrillidis, Anastasios},
  booktitle = {NeurIPS 2022 Workshops: GLFrontiers},
  year      = {2022},
  url       = {https://mlanthology.org/neuripsw/2022/wolfe2022neuripsw-gist/}
}