MGNNI: Multiscale Graph Neural Networks with Implicit Layers

Abstract

Recently, implicit graph neural networks (GNNs) have been proposed to capture long-range dependencies in underlying graphs. In this paper, we identify and justify two weaknesses of implicit GNNs: constrained expressiveness due to their limited effective range for capturing long-range dependencies, and their inability to capture multiscale information on graphs at multiple resolutions. To demonstrate the limited effective range of previous implicit GNNs, we first provide a theoretical analysis and point out the intrinsic relationship between the effective range and the convergence of the iterative equations used in these models. To mitigate these weaknesses, we propose a multiscale graph neural network with implicit layers (MGNNI) which is able to model multiscale structures on graphs and has an expanded effective range for capturing long-range dependencies. We conduct comprehensive experiments on both node classification and graph classification, showing that MGNNI outperforms representative baselines and is better at modeling multiscale structures and capturing long-range dependencies.
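To make the abstract's setup concrete, here is a minimal, hypothetical sketch (not the paper's actual implementation) of the kind of implicit GNN layer being discussed: node states are defined as the fixed point of an iterative equation, and the contraction factor that guarantees convergence also limits how far information can propagate. All names and the specific update rule below are illustrative assumptions.

```python
import numpy as np

# Illustrative implicit GNN layer: node states Z are the fixed point of
#   Z = tanh(gamma * A_hat @ Z @ W + X)
# iterated to convergence. With ||A_hat||_2 <= 1, ||W||_2 = 1, and tanh
# being 1-Lipschitz, gamma < 1 makes the map a contraction, so the
# iteration converges; the same factor shrinks long-range signals,
# which is the "effective range" limitation the abstract refers to.

rng = np.random.default_rng(0)

n, d = 6, 4                               # nodes, feature dimension
A = np.zeros((n, n))
for i in range(n - 1):                    # a path graph (long-range test case)
    A[i, i + 1] = A[i + 1, i] = 1.0
deg = A.sum(axis=1)
A_hat = A / np.sqrt(np.outer(deg, deg))   # symmetric normalization

X = rng.standard_normal((n, d))           # input node features
W = rng.standard_normal((d, d))
W /= np.linalg.norm(W, 2)                 # rescale to spectral norm 1
gamma = 0.8                               # contraction factor < 1

Z = np.zeros((n, d))
for _ in range(100):                      # fixed-point iteration
    Z_new = np.tanh(gamma * A_hat @ Z @ W + X)
    if np.linalg.norm(Z_new - Z) < 1e-6:  # converged
        break
    Z = Z_new

residual = np.linalg.norm(Z - np.tanh(gamma * A_hat @ Z @ W + X))
print(residual)                           # near zero: Z is a fixed point
```

Under this reading, MGNNI's contribution is to expand the effective range and to propagate at multiple scales (e.g., over powers of the normalized adjacency matrix), rather than relying on a single tightly contracted iteration.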

Cite

Text

Liu et al. "MGNNI: Multiscale Graph Neural Networks with Implicit Layers." Neural Information Processing Systems, 2022.

Markdown

[Liu et al. "MGNNI: Multiscale Graph Neural Networks with Implicit Layers." Neural Information Processing Systems, 2022.](https://mlanthology.org/neurips/2022/liu2022neurips-mgnni/)

BibTeX

@inproceedings{liu2022neurips-mgnni,
  title     = {{MGNNI: Multiscale Graph Neural Networks with Implicit Layers}},
  author    = {Liu, Juncheng and Hooi, Bryan and Kawaguchi, Kenji and Xiao, Xiaokui},
  booktitle = {Neural Information Processing Systems},
  year      = {2022},
  url       = {https://mlanthology.org/neurips/2022/liu2022neurips-mgnni/}
}