GraphMix: Improved Training of GNNs for Semi-Supervised Learning

Abstract

We present GraphMix, a regularization method for Graph Neural Network based semi-supervised object classification, whereby we propose to train a fully-connected network jointly with the graph neural network via parameter sharing and interpolation-based regularization. Further, we provide a theoretical analysis of how GraphMix improves the generalization bounds of the underlying graph neural network, without making any assumptions about the "aggregation" layer or the depth of the graph neural networks. We experimentally validate this analysis by applying GraphMix to various architectures such as Graph Convolutional Networks, Graph Attention Networks and Graph-U-Net. Despite its simplicity, we demonstrate that GraphMix can consistently improve or closely match state-of-the-art performance using even simpler architectures such as Graph Convolutional Networks, across three established graph benchmarks: Cora, Citeseer and Pubmed citation network datasets, as well as three newly proposed datasets: Cora-Full, Co-author-CS and Co-author-Physics.
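The abstract summarizes the method at a high level; below is a minimal sketch of that idea (not the authors' reference implementation), assuming PyTorch, a two-layer model, and a dense normalized adjacency matrix. The names `SharedTwoLayer` and `graphmix_step` and the hyperparameters `alpha`/`gamma` are illustrative assumptions; the paper's full objective additionally exploits unlabeled nodes via predicted targets, which this sketch omits.

```python
# Minimal sketch of the GraphMix idea described in the abstract:
# a fully-connected network (FCN) shares its weight matrices with a GCN,
# and the FCN branch is regularized with Manifold-Mixup-style interpolation
# of hidden states and labels. Shapes, the dense adjacency, and the loss
# weighting below are illustrative assumptions, not the paper's exact setup.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SharedTwoLayer(nn.Module):
    """Two weight matrices shared between the FCN view and the GCN view."""

    def __init__(self, in_dim, hid_dim, n_classes):
        super().__init__()
        self.lin1 = nn.Linear(in_dim, hid_dim)
        self.lin2 = nn.Linear(hid_dim, n_classes)

    def fcn_forward(self, x, lam=None, perm=None):
        # FCN view: ignores the graph; optionally mixes hidden states.
        h = F.relu(self.lin1(x))
        if lam is not None:
            h = lam * h + (1.0 - lam) * h[perm]
        return self.lin2(h)

    def gcn_forward(self, x, adj):
        # GCN view: same weights, but features are propagated over the
        # (normalized, dense) adjacency at every layer.
        h = F.relu(adj @ self.lin1(x))
        return self.lin2(adj @ h)


def graphmix_step(model, opt, x, adj, y, labeled_idx, alpha=1.0, gamma=1.0):
    """One training step: GCN loss plus mixup-regularized FCN loss on labeled nodes."""
    model.train()
    opt.zero_grad()

    # Supervised GCN loss on the labeled nodes.
    gcn_logits = model.gcn_forward(x, adj)
    loss_gcn = F.cross_entropy(gcn_logits[labeled_idx], y[labeled_idx])

    # FCN loss with interpolated hidden states and labels (Manifold Mixup).
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(labeled_idx.numel())
    xl, yl = x[labeled_idx], y[labeled_idx]
    fcn_logits = model.fcn_forward(xl, lam=lam, perm=perm)
    loss_fcn = lam * F.cross_entropy(fcn_logits, yl) + \
        (1.0 - lam) * F.cross_entropy(fcn_logits, yl[perm])

    # gamma balances the two views (illustrative hyperparameter).
    loss = loss_gcn + gamma * loss_fcn
    loss.backward()
    opt.step()
    return loss.item()
```

In this sketch, parameter sharing is what couples the two views: a call such as `graphmix_step(model, opt, features, norm_adj, labels, train_idx)` updates the same `lin1`/`lin2` weights through both the mixup-regularized FCN loss and the standard GCN loss, so the regularization learned on the FCN branch carries over to the graph-based predictor.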

Cite

Text

Verma et al. "GraphMix: Improved Training of GNNs for Semi-Supervised Learning." AAAI Conference on Artificial Intelligence, 2021. doi:10.1609/aaai.v35i11.17203

Markdown

[Verma et al. "GraphMix: Improved Training of GNNs for Semi-Supervised Learning." AAAI Conference on Artificial Intelligence, 2021.](https://mlanthology.org/aaai/2021/verma2021aaai-graphmix/) doi:10.1609/aaai.v35i11.17203

BibTeX

@inproceedings{verma2021aaai-graphmix,
  title     = {{GraphMix: Improved Training of GNNs for Semi-Supervised Learning}},
  author    = {Verma, Vikas and Qu, Meng and Kawaguchi, Kenji and Lamb, Alex and Bengio, Yoshua and Kannala, Juho and Tang, Jian},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2021},
  pages     = {10024--10032},
  doi       = {10.1609/aaai.v35i11.17203},
  url       = {https://mlanthology.org/aaai/2021/verma2021aaai-graphmix/}
}