Layer-Neighbor Sampling --- Defusing Neighborhood Explosion in GNNs
Abstract
Graph Neural Networks (GNNs) have received significant attention recently, but training them at a large scale remains a challenge. Mini-batch training coupled with sampling is used to alleviate this challenge. However, existing approaches either suffer from the neighborhood explosion phenomenon or have suboptimal performance. To address these issues, we propose a new sampling algorithm called LAyer-neighBOR sampling (LABOR). It is designed to be a direct replacement for Neighbor Sampling (NS) with the same fanout hyperparameter while sampling up to 7 times fewer vertices, without sacrificing quality. By design, the variance of the estimator of each vertex matches NS from the point of view of a single vertex. Moreover, under the same vertex sampling budget constraints, LABOR converges faster than existing layer sampling approaches and can use up to 112 times larger batch sizes compared to NS.
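For intuition, the core trick can be sketched in a few lines: as in NS, each seed vertex targets an expected fanout of k sampled neighbors, but all seeds in a layer share a single uniform random number per candidate vertex, so their samples overlap and far fewer unique vertices are drawn in total. The Python sketch below is our illustration of the uniform-probability variant as we understand it, not the authors' code; the function name labor0_sample, the adjacency-dict format, and the rng parameter are all assumptions made for this example.

import numpy as np

def labor0_sample(adj, seeds, fanout, rng):
    """One-layer sketch of uniform layer-neighbor sampling (hypothetical helper).

    adj: dict mapping each vertex to a numpy array of its in-neighbors (nonempty).
    seeds: vertices whose neighborhoods are sampled for this layer.
    fanout: target expected number of sampled neighbors per seed (k).
    Returns (seed, neighbor, weight) edges for an unbiased mean estimator.
    """
    # One shared uniform random number per candidate vertex for the whole layer.
    # Sharing r[t] across seeds makes the samples of different seeds overlap,
    # which is what keeps the number of unique sampled vertices small.
    r = {}
    edges = []
    for s in seeds:
        nbrs = adj[s]
        # Keep each neighbor with probability min(1, k/deg(s)), so the
        # expected number of sampled neighbors per seed is min(deg(s), k),
        # mirroring NS with fanout k from a single seed's point of view.
        pi = min(1.0, fanout / len(nbrs))
        for t in nbrs:
            if t not in r:
                r[t] = rng.random()
            if r[t] <= pi:
                # Weight 1/pi keeps the neighborhood-mean estimator unbiased.
                edges.append((s, t, 1.0 / pi))
    return edges

# Minimal usage: seeds 0 and 5 share candidates 2 and 3, so the shared
# randomness tends to pick the same vertices for both seeds.
rng = np.random.default_rng(0)
adj = {0: np.array([1, 2, 3, 4]), 5: np.array([2, 3, 6])}
print(labor0_sample(adj, seeds=[0, 5], fanout=2, rng=rng))

An implementation of the sampler is also available in DGL as dgl.dataloading.LaborSampler, intended as a drop-in replacement for its NeighborSampler.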
Cite
Text
Balin and Çatalyürek. "Layer-Neighbor Sampling --- Defusing Neighborhood Explosion in GNNs." Neural Information Processing Systems, 2023.
Markdown
[Balin and Çatalyürek. "Layer-Neighbor Sampling --- Defusing Neighborhood Explosion in GNNs." Neural Information Processing Systems, 2023.](https://mlanthology.org/neurips/2023/balin2023neurips-layerneighbor/)
BibTeX
@inproceedings{balin2023neurips-layerneighbor,
  title = {{Layer-Neighbor Sampling --- Defusing Neighborhood Explosion in GNNs}},
  author = {Balin, Muhammed Fatih and Çatalyürek, Ümit},
  booktitle = {Neural Information Processing Systems},
  year = {2023},
  url = {https://mlanthology.org/neurips/2023/balin2023neurips-layerneighbor/}
}