Implicit Graph Neural Networks

Abstract

Graph Neural Networks (GNNs) are widely used deep learning models that learn meaningful representations from graph-structured data. Due to the finite nature of the underlying recurrent structure, current GNN methods may struggle to capture long-range dependencies in underlying graphs. To overcome this difficulty, we propose a graph learning framework, called Implicit Graph Neural Networks (IGNN), where predictions are based on the solution of a fixed-point equilibrium equation involving implicitly defined "state" vectors. We use the Perron-Frobenius theory to derive sufficient conditions that ensure well-posedness of the framework. Leveraging implicit differentiation, we derive a tractable projected gradient descent method to train the framework. Experiments on a comprehensive range of tasks show that IGNNs consistently capture long-range dependencies and outperform state-of-the-art GNN models.
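As a rough illustration of the equilibrium computation the abstract describes, the sketch below iterates the paper's fixed-point equation X = φ(WXA + b(U)) (with φ taken as ReLU) until the state stops changing. The function name ignn_forward, the shapes, the tolerance, and the crude norm rescaling of W (a stand-in for the Perron-Frobenius well-posedness condition) are illustrative assumptions, not the authors' implementation.

import numpy as np

def ignn_forward(W, A, bU, tol=1e-6, max_iter=300):
    # Iterate X <- phi(W X A + b(U)) to an (approximate) equilibrium.
    X = np.zeros_like(bU)
    for _ in range(max_iter):
        X_new = np.maximum(W @ X @ A + bU, 0.0)  # phi = ReLU
        if np.max(np.abs(X_new - X)) < tol:
            return X_new
        X = X_new
    return X

# Hypothetical sizes: m hidden features, n nodes (illustrative only).
rng = np.random.default_rng(0)
m, n = 4, 6
A = rng.random((n, n))
A /= np.abs(np.linalg.eigvals(A)).max() + 1.0  # scale so lambda_pf(A) < 1
W = rng.standard_normal((m, m))
W *= 0.5 / np.abs(W).sum(axis=1).max()         # bound the max row sum of |W| by 0.5,
                                               # so lambda_pf(|W|) * lambda_pf(A) < 1
bU = rng.standard_normal((m, n))               # input-dependent bias b(U)
X_star = ignn_forward(W, A, bU)                # equilibrium "state" vectors

The rescaling of W plays the role of the projection step in the paper's projected gradient descent training: it keeps the iteration contractive so a unique equilibrium exists and the loop converges.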

Cite

Text

Gu et al. "Implicit Graph Neural Networks." Neural Information Processing Systems, 2020.

Markdown

[Gu et al. "Implicit Graph Neural Networks." Neural Information Processing Systems, 2020.](https://mlanthology.org/neurips/2020/gu2020neurips-implicit/)

BibTeX

@inproceedings{gu2020neurips-implicit,
  title     = {{Implicit Graph Neural Networks}},
  author    = {Gu, Fangda and Chang, Heng and Zhu, Wenwu and Sojoudi, Somayeh and El Ghaoui, Laurent},
  booktitle = {Neural Information Processing Systems},
  year      = {2020},
  url       = {https://mlanthology.org/neurips/2020/gu2020neurips-implicit/}
}