Don't Stack Layers in Graph Neural Networks, Wire Them Randomly
Abstract
Several results suggest an inherent difficulty of graph neural networks in achieving better performance as the number of layers increases. Recent works attribute this to a phenomenon peculiar to the extraction of node features in graph-based tasks, i.e., the need to consider multiple neighborhood sizes at the same time and adaptively tune them. In this paper, we investigate the recently proposed randomly wired architectures in the context of graph neural networks. Instead of building deeper networks by stacking many layers, we prove that employing a randomly-wired architecture can be a more effective way to increase the capacity of the network and obtain richer representations. We show that such architectures behave like an ensemble of paths, which are able to merge contributions from receptive fields of varied size. Moreover, these receptive fields can also be modulated to be wider or narrower through the trainable weights over the paths.
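To make the abstract's core idea concrete, the sketch below builds a small DAG of GNN layers in which each wire carries a trainable weight, so that paths of different lengths (and hence receptive fields of different sizes) are combined with learned strengths. This is a minimal illustration under stated assumptions, not the authors' implementation: the choice of PyTorch Geometric's GCNConv as the base layer, the Erdős–Rényi wiring, the sigmoid gating, and the RandomlyWiredGNN class name are all assumptions made for the example.

```python
# Minimal sketch of a randomly wired GNN block (illustrative only; the layer type,
# wiring distribution, and class name are assumptions, not the paper's exact code).
import torch
import torch.nn as nn
from torch_geometric.nn import GCNConv


class RandomlyWiredGNN(nn.Module):
    """Nodes of a random DAG are GNN layers; each node aggregates its
    predecessors' outputs with trainable (sigmoid-gated) scalar weights,
    so the block behaves like an ensemble of paths of different lengths."""

    def __init__(self, channels, num_nodes=8, edge_prob=0.4, seed=0):
        super().__init__()
        g = torch.Generator().manual_seed(seed)
        # Random DAG over ordered nodes: an edge i -> j is only allowed if i < j.
        self.preds = []
        for j in range(num_nodes):
            preds = [i for i in range(j)
                     if torch.rand((), generator=g).item() < edge_prob]
            if j > 0 and not preds:        # keep every node reachable
                preds = [j - 1]
            self.preds.append(preds)
        self.convs = nn.ModuleList(
            [GCNConv(channels, channels) for _ in range(num_nodes)])
        # One trainable aggregation weight per incoming wire of each node.
        self.wire_w = nn.ParameterList(
            [nn.Parameter(torch.zeros(max(len(p), 1))) for p in self.preds])

    def forward(self, x, edge_index):
        outs = []
        for j, conv in enumerate(self.convs):
            if self.preds[j]:
                w = torch.sigmoid(self.wire_w[j])
                inp = sum(w[k] * outs[i] for k, i in enumerate(self.preds[j]))
            else:
                inp = x                    # source nodes read the block input
            outs.append(torch.relu(conv(inp, edge_index)))
        # Sink nodes (those with no successors) are averaged into the block output.
        succ = {i for p in self.preds for i in p}
        sinks = [j for j in range(len(outs)) if j not in succ]
        return torch.stack([outs[j] for j in sinks]).mean(dim=0)
```

In this sketch, a path that traverses k of the GCNConv nodes contributes a k-hop receptive field, and the sigmoid-gated wire weights modulate how strongly each path (and thus each neighborhood size) contributes, which is one way to read the abstract's ensemble-of-paths interpretation.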
Cite
Text
Valsesia et al. "Don't Stack Layers in Graph Neural Networks, Wire Them Randomly." ICLR 2021 Workshops: GTRL, 2021.
Markdown
[Valsesia et al. "Don't Stack Layers in Graph Neural Networks, Wire Them Randomly." ICLR 2021 Workshops: GTRL, 2021.](https://mlanthology.org/iclrw/2021/valsesia2021iclrw-don/)
BibTeX
@inproceedings{valsesia2021iclrw-don,
title = {{Don't Stack Layers in Graph Neural Networks, Wire Them Randomly}},
author = {Valsesia, Diego and Fracastoro, Giulia and Magli, Enrico},
booktitle = {ICLR 2021 Workshops: GTRL},
year = {2021},
url = {https://mlanthology.org/iclrw/2021/valsesia2021iclrw-don/}
}