GRAND++: Graph Neural Diffusion with a Source Term
Abstract
We propose GRAph Neural Diffusion with a source term (GRAND++) for graph deep learning with a limited number of labeled nodes, i.e., a low labeling rate. GRAND++ is a class of continuous-depth graph deep learning architectures whose theoretical underpinning is a diffusion process on graphs with a source term. The source term guarantees two interesting theoretical properties of GRAND++: (i) the representations of graph nodes, under the dynamics of GRAND++, do not converge to a constant vector over all nodes even as time goes to infinity, which mitigates the over-smoothing issue of graph neural networks and enables graph learning with very deep architectures; and (ii) GRAND++ can provide accurate classification even when the model is trained with very limited labeled training data. We experimentally verify these two advantages on various graph deep learning benchmark tasks, showing a significant improvement over many existing graph neural networks.
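For intuition, the node dynamics described in the abstract can be sketched as graph diffusion plus a source term. The display below is a generic sketch, not the paper's exact formulation: the edge weights a_{ij} and the per-node source s_i are illustrative placeholders (in GRAND++ the source is tied to the labeled nodes).

% Hedged sketch: graph diffusion with a source term (generic form,
% with illustrative weights a_{ij} and source vectors s_i).
\[
  \frac{d x_i(t)}{dt}
  \;=\;
  \sum_{j:\,(i,j)\in E} a_{ij}\,\bigl(x_j(t) - x_i(t)\bigr)
  \;+\;
  s_i ,
\]

where x_i(t) is the representation of node i at depth (time) t. With s_i = 0 the diffusion term alone drives all x_i(t) toward a common constant vector as t grows, which is the over-smoothing behavior the source term counteracts.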
Cite
Text
Thorpe et al. "GRAND++: Graph Neural Diffusion with a Source Term." International Conference on Learning Representations, 2022.
Markdown
[Thorpe et al. "GRAND++: Graph Neural Diffusion with a Source Term." International Conference on Learning Representations, 2022.](https://mlanthology.org/iclr/2022/thorpe2022iclr-grand/)
BibTeX
@inproceedings{thorpe2022iclr-grand,
  title = {{GRAND++: Graph Neural Diffusion with a Source Term}},
  author = {Thorpe, Matthew and Nguyen, Tan Minh and Xia, Hedi and Strohmer, Thomas and Bertozzi, Andrea and Osher, Stanley and Wang, Bao},
  booktitle = {International Conference on Learning Representations},
  year = {2022},
  url = {https://mlanthology.org/iclr/2022/thorpe2022iclr-grand/}
}