Rethinking Graph Regularization for Graph Neural Networks

Abstract

The graph Laplacian regularization term is usually used in semi-supervised representation learning to provide graph structure information for a model f(X). However, with the recent popularity of graph neural networks (GNNs), directly encoding the graph structure A into a model, i.e., f(A, X), has become the more common approach. We first show that graph Laplacian regularization brings little-to-no benefit to existing GNNs, and then propose a simple but non-trivial variant of graph Laplacian regularization, called Propagation-regularization (P-reg), to boost the performance of existing GNN models. We provide formal analyses to show that P-reg not only infuses extra information (that is not captured by the traditional graph Laplacian regularization) into GNNs, but also has capacity equivalent to an infinite-depth graph convolutional network. We demonstrate that P-reg can effectively boost the performance of existing GNN models on both node-level and graph-level tasks across many different datasets.
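As a rough illustration of the idea (not spelled out in the abstract itself), P-reg can be read as penalizing the discrepancy between a GNN's output Z = f(A, X) and its once-propagated version ÂZ, where Â is a normalized adjacency matrix. The sketch below is a minimal PyTorch rendering of that reading; the helper names (`normalize_adj`, `p_reg`), the row-normalization choice, and the squared-error discrepancy are illustrative assumptions, not the paper's exact implementation.

```python
# Minimal sketch of Propagation-regularization (P-reg) in PyTorch.
# Assumptions: z is the node-level output of a GNN f(A, X), adj is a dense
# adjacency matrix, and the discrepancy measure is squared error.
import torch

def normalize_adj(adj: torch.Tensor) -> torch.Tensor:
    """Row-normalize the adjacency with self-loops: A_hat = D^{-1}(A + I)."""
    adj = adj + torch.eye(adj.size(0), device=adj.device)
    deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
    return adj / deg

def p_reg(z: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
    """P-reg penalty: discrepancy between Z and its propagated version A_hat @ Z."""
    a_hat = normalize_adj(adj)
    z_prop = a_hat @ z                      # one extra propagation step
    return 0.5 * ((z_prop - z) ** 2).sum(dim=1).mean()

# Usage (sketch): add the penalty to the supervised loss with a weight mu.
# z = gnn(adj, x)                           # z: [num_nodes, num_classes]
# loss = cls_loss(z[train_mask], y[train_mask]) + mu * p_reg(z, adj)
```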

Cite

Text

Yang et al. "Rethinking Graph Regularization for Graph Neural Networks." AAAI Conference on Artificial Intelligence, 2021. doi:10.1609/AAAI.V35I5.16586

Markdown

[Yang et al. "Rethinking Graph Regularization for Graph Neural Networks." AAAI Conference on Artificial Intelligence, 2021.](https://mlanthology.org/aaai/2021/yang2021aaai-rethinking/) doi:10.1609/AAAI.V35I5.16586

BibTeX

@inproceedings{yang2021aaai-rethinking,
  title     = {{Rethinking Graph Regularization for Graph Neural Networks}},
  author    = {Yang, Han and Ma, Kaili and Cheng, James},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2021},
  pages     = {4573--4581},
  doi       = {10.1609/AAAI.V35I5.16586},
  url       = {https://mlanthology.org/aaai/2021/yang2021aaai-rethinking/}
}