SGFormer: Simplifying and Empowering Transformers for Large-Graph Representations

Abstract

Learning representations on large graphs is a long-standing challenge due to the inherent interdependence among massive numbers of data points. Transformers, an emerging class of foundation encoders for graph-structured data, have shown promising performance on small graphs thanks to their global attention, which captures all-pair influence beyond neighboring nodes. Even so, existing approaches tend to inherit the design spirit of Transformers for language and vision tasks, embracing complicated models that stack deep multi-head attention layers. In this paper, we demonstrate that even a single-layer attention can deliver surprisingly competitive performance across node property prediction benchmarks whose node counts range from the thousands to the billions. This encourages us to rethink the design philosophy of Transformers on large graphs, where global attention is a computational overhead that hinders scalability. We frame the proposed scheme as Simplified Graph Transformers (SGFormer), empowered by a simple attention model that can efficiently propagate information among arbitrary nodes in a single layer. SGFormer requires no positional encodings, feature/graph pre-processing, or augmented loss functions. Empirically, SGFormer successfully scales to the web-scale graph ogbn-papers100M and yields up to 141x inference acceleration over state-of-the-art Transformers on medium-sized graphs. Beyond the current results, we believe the proposed methodology alone opens a new technical path of independent interest for building Transformers on large graphs.
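As a rough illustration of the kind of single-layer, softmax-free global attention the abstract alludes to, the sketch below implements a linear all-pair attention in PyTorch: by computing K^T V before multiplying by Q, the N x N attention matrix is never materialized, so cost stays linear in the number of nodes. The module name `SimpleGlobalAttention`, the L2 normalization of queries/keys, and the row-wise normalizer are illustrative assumptions, not the authors' exact formulation; see the paper and official code for the precise design.

```python
# Minimal sketch (assumptions, not the authors' exact formulation) of a
# one-layer, softmax-free global attention with cost linear in node count N.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SimpleGlobalAttention(nn.Module):
    """Single-layer all-pair attention computed in O(N * d^2) via associativity."""

    def __init__(self, in_dim: int, hidden_dim: int):
        super().__init__()
        self.wq = nn.Linear(in_dim, hidden_dim)
        self.wk = nn.Linear(in_dim, hidden_dim)
        self.wv = nn.Linear(in_dim, hidden_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: [N, in_dim] node features; N may be very large.
        q = F.normalize(self.wq(x), p=2, dim=-1)   # [N, d]
        k = F.normalize(self.wk(x), p=2, dim=-1)   # [N, d]
        v = self.wv(x)                             # [N, d]
        n = x.size(0)

        # Associativity trick: form K^T V ([d, d]) first, never the N x N
        # attention matrix, so memory and compute stay linear in N.
        kv = k.t() @ v                             # [d, d]
        numer = v + (q @ kv) / n                   # [N, d]

        # Row-wise normalizer: 1 + (1/N) * q . (K^T 1)
        k_sum = k.sum(dim=0)                       # [d]
        denom = 1.0 + (q @ k_sum) / n              # [N]
        return numer / denom.unsqueeze(-1)


if __name__ == "__main__":
    x = torch.randn(10_000, 64)                    # 10k nodes, 64-dim features
    attn = SimpleGlobalAttention(64, 128)
    print(attn(x).shape)                           # torch.Size([10000, 128])
```

In the full model, such a global attention branch would typically be combined with a lightweight graph-based propagation over the input adjacency; the sketch above only covers the attention component.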

Cite

Text

Wu et al. "SGFormer: Simplifying and Empowering Transformers for Large-Graph Representations." Neural Information Processing Systems, 2023.

Markdown

[Wu et al. "SGFormer: Simplifying and Empowering Transformers for Large-Graph Representations." Neural Information Processing Systems, 2023.](https://mlanthology.org/neurips/2023/wu2023neurips-sgformer/)

BibTeX

@inproceedings{wu2023neurips-sgformer,
  title     = {{SGFormer: Simplifying and Empowering Transformers for Large-Graph Representations}},
  author    = {Wu, Qitian and Zhao, Wentao and Yang, Chenxiao and Zhang, Hengrui and Nie, Fan and Jiang, Haitian and Bian, Yatao and Yan, Junchi},
  booktitle = {Neural Information Processing Systems},
  year      = {2023},
  url       = {https://mlanthology.org/neurips/2023/wu2023neurips-sgformer/}
}