Non-Backtracking Graph Neural Networks

Abstract

The celebrated message-passing updates for graph neural networks allow the representation of large-scale graphs with local and computationally tractable updates. However, the local updates suffer from backtracking, i.e., a message can flow through the same edge twice and revisit the previously visited node. Since the number of message flows increases exponentially with the number of updates, this redundancy in local updates prevents the graph neural network from accurately recognizing a particular message flow relevant for downstream tasks. In this work, we propose to resolve such redundancy via the non-backtracking graph neural network (NBA-GNN), which updates a message without incorporating the message from the previously visited node. We further investigate how NBA-GNN alleviates the over-squashing of GNNs, and establish a connection between NBA-GNN and the impressive performance of non-backtracking updates for stochastic block model recovery. We empirically verify the effectiveness of NBA-GNN on the long-range graph benchmark and transductive node classification problems.
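
Below is a minimal sketch of the non-backtracking idea described in the abstract, assuming an undirected graph with messages kept on directed edges: the update for an edge (u → v) aggregates messages from incoming edges (w → u) with w ≠ v, so a message never flows straight back to the node it just came from. The function and variable names are illustrative only and are not the authors' implementation, which would additionally use learnable update functions.

```python
# Sketch of one non-backtracking message-passing step (illustrative, not the paper's code).
import numpy as np

def non_backtracking_step(edges, messages):
    """One aggregation step over directed-edge messages.

    edges    : list of directed edges (u, v)
    messages : dict mapping (u, v) -> message vector (np.ndarray)
    Returns a new dict of aggregated messages.
    """
    # Group incoming edges by their head node for fast lookup.
    incoming = {}
    for (w, u) in edges:
        incoming.setdefault(u, []).append(w)

    new_messages = {}
    for (u, v) in edges:
        # Aggregate messages on edges (w -> u), excluding the backtracking edge (v -> u).
        neighbors = [w for w in incoming.get(u, []) if w != v]
        if neighbors:
            agg = sum(messages[(w, u)] for w in neighbors)
        else:
            agg = np.zeros_like(messages[(u, v)])
        # A real GNN layer would pass `agg` through a learnable update function;
        # here we keep the raw aggregation for illustration.
        new_messages[(u, v)] = agg
    return new_messages

if __name__ == "__main__":
    # Toy path graph 0 - 1 - 2, represented with both edge directions.
    undirected = [(0, 1), (1, 2)]
    edges = undirected + [(v, u) for (u, v) in undirected]
    rng = np.random.default_rng(0)
    messages = {e: rng.normal(size=4) for e in edges}
    updated = non_backtracking_step(edges, messages)
    # The only edge into node 0 comes from node 1, which would backtrack,
    # so the updated message on (0 -> 1) is the zero vector.
    print(updated[(0, 1)])
```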

Cite

Text

Park et al. "Non-Backtracking Graph Neural Networks." NeurIPS 2023 Workshops: GLFrontiers, 2023.

Markdown

[Park et al. "Non-Backtracking Graph Neural Networks." NeurIPS 2023 Workshops: GLFrontiers, 2023.](https://mlanthology.org/neuripsw/2023/park2023neuripsw-nonbacktracking/)

BibTeX

@inproceedings{park2023neuripsw-nonbacktracking,
  title     = {{Non-Backtracking Graph Neural Networks}},
  author    = {Park, Seonghyun and Ryu, Narae and Kim, Gahee and Woo, Dongyeop and Yun, Se-Young and Ahn, Sungsoo},
  booktitle = {NeurIPS 2023 Workshops: GLFrontiers},
  year      = {2023},
  url       = {https://mlanthology.org/neuripsw/2023/park2023neuripsw-nonbacktracking/}
}