DiP-GNN: Discriminative Pre-Training of Graph Neural Networks
Abstract
Graph neural network (GNN) pre-training methods have been proposed to enhance the power of GNNs. Specifically, a GNN is first pre-trained on a large-scale unlabeled graph and then fine-tuned on a separate, small labeled graph for downstream applications such as node classification. One popular pre-training method masks out a proportion of the edges and trains a GNN to recover them. However, such a generative method suffers from graph mismatch: the masked graph fed to the GNN deviates from the original graph. To alleviate this issue, we propose DiP-GNN (Discriminative Pre-training of Graph Neural Networks). Specifically, we train a generator to recover the identities of the masked edges, and simultaneously we train a discriminator to distinguish the generated edges from the original graph's edges. The discriminator is subsequently used for downstream fine-tuning. In our pre-training framework, the graph seen by the discriminator better matches the original graph because the generator can recover a proportion of the masked edges. Extensive experiments on large-scale homogeneous and heterogeneous graphs demonstrate the effectiveness of DiP-GNN. Our code will be publicly available.
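The abstract describes a generator-discriminator pre-training loop over masked edges. Below is a minimal, illustrative sketch of that loop, not the authors' implementation: the Encoder class is a toy MLP stand-in for a real GNN encoder, and the edge-masking and edge-scoring choices are assumptions made only to keep the example self-contained.

import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy stand-in for a GNN encoder (assumption: a plain MLP, used only so the sketch runs end to end).
class Encoder(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
    def forward(self, x):
        return self.net(x)

dim, num_nodes = 16, 100
x = torch.randn(num_nodes, dim)                  # node features
edges = torch.randint(0, num_nodes, (2, 400))    # original edge list (source, destination)

generator, discriminator = Encoder(dim), Encoder(dim)
opt = torch.optim.Adam(
    list(generator.parameters()) + list(discriminator.parameters()), lr=1e-3
)
opt.zero_grad()

# 1) Mask a proportion of the edges.
mask = torch.rand(edges.size(1)) < 0.15
masked_src = edges[0, mask]

# 2) Generator tries to recover each masked edge by scoring all candidate
#    destination nodes against the source node's embedding.
h_gen = generator(x)
logits = h_gen[masked_src] @ h_gen.t()           # [num_masked, num_nodes]
gen_loss = F.cross_entropy(logits, edges[1, mask])
pred_dst = logits.argmax(dim=-1)                 # generated (possibly incorrect) edges

# 3) The discriminator sees the unmasked edges plus the generated edges,
#    so its input graph is closer to the original than a masked graph.
new_edges = edges.clone()
new_edges[1, mask] = pred_dst

# 4) Discriminator classifies each edge as original vs. generated.
h_dis = discriminator(x)
edge_score = (h_dis[new_edges[0]] * h_dis[new_edges[1]]).sum(dim=-1)
is_generated = torch.zeros(edges.size(1))
is_generated[mask] = (pred_dst != edges[1, mask]).float()
dis_loss = F.binary_cross_entropy_with_logits(edge_score, is_generated)

# Joint update; only the discriminator is kept for downstream fine-tuning.
(gen_loss + dis_loss).backward()
opt.step()

Edges the generator happens to recover correctly are labeled as original here, reflecting the idea that a good generator makes the discriminator's input graph match the original more closely.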
Cite
Text
Zuo et al. "DiP-GNN: Discriminative Pre-Training of Graph Neural Networks." NeurIPS 2023 Workshops: GLFrontiers, 2023.
Markdown
[Zuo et al. "DiP-GNN: Discriminative Pre-Training of Graph Neural Networks." NeurIPS 2023 Workshops: GLFrontiers, 2023.](https://mlanthology.org/neuripsw/2023/zuo2023neuripsw-dipgnn/)
BibTeX
@inproceedings{zuo2023neuripsw-dipgnn,
  title = {{DiP-GNN: Discriminative Pre-Training of Graph Neural Networks}},
  author = {Zuo, Simiao and Jiang, Haoming and Yin, Qingyu and Tang, Xianfeng and Yin, Bing and Zhao, Tuo},
  booktitle = {NeurIPS 2023 Workshops: GLFrontiers},
  year = {2023},
  url = {https://mlanthology.org/neuripsw/2023/zuo2023neuripsw-dipgnn/}
}