MLPInit: Embarrassingly Simple GNN Training Acceleration with MLP Initialization

Abstract

Training graph neural networks (GNNs) on large graphs is complex and extremely time consuming. This is attributed to overheads caused by sparse matrix multiplication, which are sidestepped when training multi-layer perceptrons (MLPs) with only node features. MLPs, by ignoring graph context, are simple and faster to train on graph data; however, they usually sacrifice prediction accuracy, limiting their applicability to graph tasks. We observe that for most message passing-based GNNs, we can trivially derive an analogous MLP (which we call a PeerMLP) with an equivalent weight space by setting its trainable parameters to the same shapes, which makes us curious how GNNs perform when using the weights of a fully trained PeerMLP. Surprisingly, we find that GNNs initialized with such weights significantly outperform their PeerMLPs, motivating us to use PeerMLP training as a precursor initialization step for GNN training. To this end, we propose an embarrassingly simple, yet hugely effective, initialization method for GNN training acceleration, called MLPInit. Our extensive experiments on multiple large-scale graph datasets with diverse GNN architectures validate that MLPInit can accelerate GNN training (up to 33× speedup on OGB-Products) and often improve prediction performance (e.g., up to 7.97% improvement for GraphSAGE across 7 node classification datasets, and up to 17.81% improvement across 4 link prediction datasets on the Hits@10 metric). The code is available at https://github.com/snap-research/MLPInit-for-GNNs.
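
To make the idea concrete, below is a minimal PyTorch sketch of the MLPInit recipe, not the authors' implementation: a GraphSAGE-style GNN and its PeerMLP are built with identical parameter names and shapes, the PeerMLP is trained cheaply on node features alone, and its weights are then copied into the GNN as initialization. The class and helper names (`SAGELayer`, `PeerMLPLayer`, `GNN`, `PeerMLP`, `mlp_init`) are illustrative assumptions, not names from the paper's codebase.

```python
# Minimal MLPInit sketch (assumed, illustrative; not the official implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F


class SAGELayer(nn.Module):
    """GraphSAGE-style layer: self transform plus mean-aggregated neighbors."""

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin_self = nn.Linear(in_dim, out_dim)
        self.lin_neigh = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj_norm):
        # adj_norm: row-normalized sparse adjacency (the costly sparse matmul).
        return self.lin_self(x) + self.lin_neigh(torch.sparse.mm(adj_norm, x))


class PeerMLPLayer(nn.Module):
    """PeerMLP analog: same trainable weights, but no neighbor aggregation."""

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin_self = nn.Linear(in_dim, out_dim)
        self.lin_neigh = nn.Linear(in_dim, out_dim)  # kept so weight shapes match

    def forward(self, x):
        return self.lin_self(x) + self.lin_neigh(x)


class GNN(nn.Module):
    def __init__(self, in_dim, hid_dim, out_dim):
        super().__init__()
        self.layer1 = SAGELayer(in_dim, hid_dim)
        self.layer2 = SAGELayer(hid_dim, out_dim)

    def forward(self, x, adj_norm):
        return self.layer2(F.relu(self.layer1(x, adj_norm)), adj_norm)


class PeerMLP(nn.Module):
    def __init__(self, in_dim, hid_dim, out_dim):
        super().__init__()
        self.layer1 = PeerMLPLayer(in_dim, hid_dim)
        self.layer2 = PeerMLPLayer(hid_dim, out_dim)

    def forward(self, x):
        return self.layer2(F.relu(self.layer1(x)))


def mlp_init(gnn, peer_mlp):
    """MLPInit: load the fully trained PeerMLP weights into the GNN.

    Works because both models expose the same parameter names and shapes."""
    gnn.load_state_dict(peer_mlp.state_dict())


if __name__ == "__main__":
    # Toy data: 100 nodes, 16-dim features, 3 classes, 400 random edges.
    x, labels = torch.randn(100, 16), torch.randint(0, 3, (100,))
    idx = torch.randint(0, 100, (2, 400))
    deg = torch.zeros(100).index_add_(0, idx[0], torch.ones(400)).clamp(min=1)
    adj_norm = torch.sparse_coo_tensor(idx, 1.0 / deg[idx[0]], (100, 100))

    peer_mlp, gnn = PeerMLP(16, 32, 3), GNN(16, 32, 3)
    opt = torch.optim.Adam(peer_mlp.parameters(), lr=0.01)
    for _ in range(200):  # cheap pre-training: no sparse matmul involved
        opt.zero_grad()
        F.cross_entropy(peer_mlp(x), labels).backward()
        opt.step()

    mlp_init(gnn, peer_mlp)  # initialize the GNN with the PeerMLP's weights
    # ...continue training the GNN with message passing from this initialization.
    print(gnn(x, adj_norm).shape)  # torch.Size([100, 3])
```

Because the GNN and its PeerMLP share parameter names and shapes, the weight transfer reduces to a single `load_state_dict` call; the only difference between the two models is whether the sparse neighbor aggregation is applied in the forward pass.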

Cite

Text

Han et al. "MLPInit: Embarrassingly Simple GNN Training Acceleration with MLP Initialization." International Conference on Learning Representations, 2023.

Markdown

[Han et al. "MLPInit: Embarrassingly Simple GNN Training Acceleration with MLP Initialization." International Conference on Learning Representations, 2023.](https://mlanthology.org/iclr/2023/han2023iclr-mlpinit/)

BibTeX

@inproceedings{han2023iclr-mlpinit,
  title     = {{MLPInit: Embarrassingly Simple GNN Training Acceleration with MLP Initialization}},
  author    = {Han, Xiaotian and Zhao, Tong and Liu, Yozen and Hu, Xia and Shah, Neil},
  booktitle = {International Conference on Learning Representations},
  year      = {2023},
  url       = {https://mlanthology.org/iclr/2023/han2023iclr-mlpinit/}
}