Improving Graph Neural Networks with Learnable Propagation Operators
Abstract
Graph Neural Networks (GNNs) are limited in their propagation operators. In many cases, these operators contain only non-negative elements and are shared across channels, limiting the expressiveness of GNNs. Moreover, some GNNs suffer from over-smoothing, limiting their depth. On the other hand, Convolutional Neural Networks (CNNs) can learn diverse propagation filters, and phenomena like over-smoothing are typically not apparent in CNNs. In this paper, we bridge these gaps by incorporating trainable channel-wise weighting factors $\omega$ to learn and mix multiple smoothing and sharpening propagation operators at each layer. Our generic method, called $\omega$GNN, is easy to implement. We study two variants: $\omega$GCN and $\omega$GAT. For $\omega$GCN, we theoretically analyse its behaviour and the impact of $\omega$ on the obtained node features. Our experiments confirm these findings, demonstrating and explaining why both variants do not over-smooth. Additionally, we experiment with 15 real-world datasets on node- and graph-classification tasks, where our $\omega$GCN and $\omega$GAT perform on par with state-of-the-art methods.
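To illustrate the mechanism described above, here is a minimal PyTorch sketch of a GCN-style layer with a trainable per-channel weight $\omega$ that mixes a node's own features with its neighbourhood-averaged features. The class name OmegaGCNLayer, the dense normalized adjacency adj_norm, and the exact update rule are assumptions made for exposition, not the authors' reference implementation: channels with $\omega$ near 1 smooth over the graph, $\omega$ near 0 keep the identity, and negative $\omega$ act as sharpening operators.

import torch
import torch.nn as nn

class OmegaGCNLayer(nn.Module):
    """GCN-style layer whose per-channel omega mixes identity and propagation (illustrative sketch)."""
    def __init__(self, in_channels: int, out_channels: int):
        super().__init__()
        self.lin = nn.Linear(in_channels, out_channels, bias=False)
        # One trainable omega per output channel; left unconstrained so that
        # values outside [0, 1] (sharpening operators) can also be learned.
        self.omega = nn.Parameter(torch.ones(out_channels))

    def forward(self, x: torch.Tensor, adj_norm: torch.Tensor) -> torch.Tensor:
        # x: (num_nodes, in_channels); adj_norm: normalized adjacency, (num_nodes, num_nodes)
        h = self.lin(x)
        smoothed = adj_norm @ h  # neighbourhood averaging (smoothing term)
        # Channel-wise mix: omega ~ 1 smooths, omega ~ 0 keeps the node's own
        # features, omega < 0 amplifies differences to neighbours (sharpening).
        return (1.0 - self.omega) * h + self.omega * smoothed

Usage in a toy setting: layer = OmegaGCNLayer(16, 32); out = layer(x, adj_norm), where adj_norm is assumed to already include self-loops and symmetric normalization.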
Cite
Text
Eliasof et al. "Improving Graph Neural Networks with Learnable Propagation Operators." International Conference on Machine Learning, 2023.Markdown
[Eliasof et al. "Improving Graph Neural Networks with Learnable Propagation Operators." International Conference on Machine Learning, 2023.](https://mlanthology.org/icml/2023/eliasof2023icml-improving/)BibTeX
@inproceedings{eliasof2023icml-improving,
title = {{Improving Graph Neural Networks with Learnable Propagation Operators}},
author = {Eliasof, Moshe and Ruthotto, Lars and Treister, Eran},
booktitle = {International Conference on Machine Learning},
year = {2023},
pages = {9224--9245},
volume = {202},
url = {https://mlanthology.org/icml/2023/eliasof2023icml-improving/}
}