SA-MLP: Distilling Graph Knowledge from GNNs into Structure-Aware MLP

Abstract

The recursive node fetching and aggregation of message-passing cause inference latency when deploying Graph Neural Networks (GNNs) on large-scale graphs. One promising direction for accelerating inference is to distill GNNs into message-passing-free student Multi-Layer Perceptrons (MLPs). However, an MLP student without graph dependency cannot fully learn the structural knowledge of GNNs, which leads to inferior performance in heterophilic and online scenarios. To address this problem, we first design a simple yet effective Structure-Aware MLP (SA-MLP) as a student model. It uses linear layers as encoders and decoders to capture both node features and graph structures, without message-passing among nodes. Furthermore, we introduce a novel structure-mixing knowledge distillation technique that generates virtual samples carrying a hybrid of structure knowledge from the teacher GNN, thereby enhancing the MLP's ability to learn structure information. Extensive experiments on eight benchmark datasets under both transductive and online settings show that SA-MLP consistently achieves similar or even better results than its teacher GNNs while retaining inference as fast as a plain MLP. Our findings reveal that SA-MLP efficiently assimilates graph knowledge through end-to-end distillation from GNNs, eliminating the need for complex model architectures and preprocessing of features or structures. Our code is available at https://github.com/JC-202/SA-MLP.
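To make the two ideas in the abstract concrete, here is a minimal NumPy sketch of a structure-aware student and mixup-style structure-mixing distillation. It is our illustration under stated assumptions, not the authors' implementation: we assume the student encodes a node's feature vector and its adjacency row with separate linear layers and sums them, and that distillation interpolates the structure input, feature input, and teacher soft labels of two nodes with the same mixing coefficient. All variable names and the random "teacher" outputs are hypothetical stand-ins; real soft labels would come from a trained GNN.

```python
import numpy as np

rng = np.random.default_rng(0)
n_nodes, n_feats, n_classes, hidden = 6, 4, 3, 8

# Toy inputs (hypothetical data): node features X and adjacency A.
X = rng.normal(size=(n_nodes, n_feats))
A = (rng.random((n_nodes, n_nodes)) < 0.4).astype(float)
np.fill_diagonal(A, 1.0)  # include self-loops

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Student: a feature encoder, a structure encoder, and a decoder, all
# linear; each node is classified from (x_i, a_i) alone -- no message
# passing, so inference touches no neighbors.
Wf = rng.normal(scale=0.1, size=(n_feats, hidden))
Ws = rng.normal(scale=0.1, size=(n_nodes, hidden))
Wd = rng.normal(scale=0.1, size=(hidden, n_classes))

def student(x_rows, a_rows):
    h = x_rows @ Wf + a_rows @ Ws          # fuse feature + structure codes
    return softmax(np.maximum(h, 0) @ Wd)  # ReLU then linear decoder

# Stand-in for the teacher GNN's soft labels (random here; in practice
# these are the trained teacher's predicted class distributions).
teacher = softmax(rng.normal(size=(n_nodes, n_classes)))

# Structure-mixing distillation: interpolate two nodes' inputs and the
# corresponding teacher outputs with one Beta-sampled coefficient,
# producing a virtual sample with hybrid structure knowledge.
lam = rng.beta(0.5, 0.5)
i, j = rng.choice(n_nodes, size=2, replace=False)
x_mix = lam * X[i] + (1 - lam) * X[j]
a_mix = lam * A[i] + (1 - lam) * A[j]
t_mix = lam * teacher[i] + (1 - lam) * teacher[j]

pred = student(x_mix[None, :], a_mix[None, :])[0]
# Distillation loss: cross-entropy between student output and mixed
# teacher distribution (minimized by gradient descent in training).
kd_loss = -(t_mix * np.log(pred + 1e-12)).sum()
```

During training, this loss is back-propagated through `Wf`, `Ws`, and `Wd`; at deployment only the per-node forward pass runs, which is why inference stays as cheap as a plain MLP.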

Cite

Text

Chen et al. "SA-MLP: Distilling Graph Knowledge from GNNs into Structure-Aware MLP." Transactions on Machine Learning Research, 2024.

Markdown

[Chen et al. "SA-MLP: Distilling Graph Knowledge from GNNs into Structure-Aware MLP." Transactions on Machine Learning Research, 2024.](https://mlanthology.org/tmlr/2024/chen2024tmlr-samlp/)

BibTeX

@article{chen2024tmlr-samlp,
  title     = {{SA-MLP: Distilling Graph Knowledge from GNNs into Structure-Aware MLP}},
  author    = {Chen, Jie and Bai, Mingyuan and Chen, Shouzhen and Gao, Junbin and Zhang, Junping and Pu, Jian},
  journal   = {Transactions on Machine Learning Research},
  year      = {2024},
  url       = {https://mlanthology.org/tmlr/2024/chen2024tmlr-samlp/}
}