GraphFM: Improving Large-Scale GNN Training via Feature Momentum
Abstract
Training of graph neural networks (GNNs) for large-scale node classification is challenging. A key difficulty lies in obtaining accurate hidden node representations while avoiding the neighborhood explosion problem. Here, we propose a new technique, named feature momentum (FM), that uses a momentum step to incorporate historical embeddings when updating feature representations. We develop two specific algorithms, known as GraphFM-IB and GraphFM-OB, that consider in-batch and out-of-batch data, respectively. GraphFM-IB applies FM to in-batch sampled data, while GraphFM-OB applies FM to out-of-batch data that lie in the 1-hop neighborhood of in-batch data. We provide a convergence analysis for GraphFM-IB and some theoretical insight for GraphFM-OB. Empirically, we observe that GraphFM-IB can effectively alleviate the neighborhood explosion problem of existing methods. In addition, GraphFM-OB achieves promising performance on multiple large-scale graph datasets.
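The momentum step described in the abstract can be sketched in a few lines: a historical embedding buffer is blended with freshly computed embeddings rather than overwritten. The following is a minimal PyTorch sketch under stated assumptions; the buffer name `hist_emb`, the function `feature_momentum_update`, and the coefficient `beta` are illustrative and not the authors' reference implementation.

```python
import torch

def feature_momentum_update(hist_emb: torch.Tensor,
                            new_emb: torch.Tensor,
                            node_idx: torch.Tensor,
                            beta: float = 0.9) -> torch.Tensor:
    """Blend freshly computed embeddings into a historical buffer.

    hist_emb: [num_nodes, dim] buffer of historical node embeddings.
    new_emb:  [batch, dim] embeddings computed for the current mini-batch
              (for GraphFM-IB, the in-batch nodes; for GraphFM-OB, the
              1-hop out-of-batch neighbors).
    node_idx: [batch] indices of the nodes whose rows are updated.
    beta:     assumed momentum coefficient; beta = 1 recovers plain
              overwriting, as in ordinary historical-embedding methods.
    """
    with torch.no_grad():
        hist_emb[node_idx] = (1.0 - beta) * hist_emb[node_idx] + beta * new_emb
    return hist_emb
```

In this sketch, smaller `beta` keeps more of the stale embedding, trading staleness for lower variance in the updated representations.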
Cite
Text
Yu et al. "GraphFM: Improving Large-Scale GNN Training via Feature Momentum." International Conference on Machine Learning, 2022.
Markdown
[Yu et al. "GraphFM: Improving Large-Scale GNN Training via Feature Momentum." International Conference on Machine Learning, 2022.](https://mlanthology.org/icml/2022/yu2022icml-graphfm/)
BibTeX
@inproceedings{yu2022icml-graphfm,
title = {{GraphFM: Improving Large-Scale GNN Training via Feature Momentum}},
author = {Yu, Haiyang and Wang, Limei and Wang, Bokun and Liu, Meng and Yang, Tianbao and Ji, Shuiwang},
booktitle = {International Conference on Machine Learning},
year = {2022},
  pages = {25684--25701},
volume = {162},
url = {https://mlanthology.org/icml/2022/yu2022icml-graphfm/}
}