DUALFormer: Dual Graph Transformer
Abstract
Graph Transformers (GTs), adept at capturing both the local and global structure of graphs, have shown promising potential in node classification tasks. Most state-of-the-art GTs succeed by integrating local Graph Neural Networks (GNNs) with global Self-Attention (SA) modules to enhance structural awareness. Nonetheless, this architecture faces limitations arising from scalability challenges and the trade-off between capturing local and global information. On the one hand, the quadratic complexity of the SA modules poses a significant challenge for many GTs, particularly when scaling to large graphs; many GTs have had to sacrifice some expressivity for computational efficiency. On the other hand, GTs struggle to preserve detailed local structural information while capturing long-range dependencies, and typically incur significant computational cost to balance local and global expressivity. To address these limitations, this paper introduces a novel GT architecture, dubbed DUALFormer, featuring a dual-dimensional design of its GNN and SA modules. Leveraging approximation theory from linearized Transformers and treating the query as a surrogate representation of node features, DUALFormer *efficiently* performs the computationally intensive global SA module along the feature dimension. Furthermore, by separating the local and global modules into dual dimensions, DUALFormer achieves a natural balance between local and global expressivity. In theory, DUALFormer can reduce intra-class variance, thereby enhancing the discriminability of node representations. Extensive experiments on eleven real-world datasets demonstrate its effectiveness and efficiency over existing state-of-the-art GTs.
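The linear-complexity attention the abstract refers to builds on the kernel trick from linearized Transformers: factoring the softmax attention through a feature map so the n×n attention matrix is never formed. Below is a minimal, hedged sketch of that general idea (not DUALFormer's exact module; the feature map `elu(x)+1` follows the common linear-attention convention and is an assumption here):

```python
import numpy as np

def linearized_attention(Q, K, V, eps=1e-6):
    """Kernelized (linear) attention: phi(Q) @ (phi(K)^T V).

    With feature map phi(x) = elu(x) + 1 (a common choice in the
    linear-attention literature), the (d, d) key-value summary is
    computed once, so cost is O(n * d^2) instead of O(n^2 * d).
    """
    phi = lambda x: np.where(x > 0, x + 1.0, np.exp(x))  # elu(x) + 1 > 0
    Qp, Kp = phi(Q), phi(K)
    KV = Kp.T @ V                       # (d, d) summary of all keys/values
    Z = Qp @ Kp.sum(axis=0) + eps       # per-query normalization, shape (n,)
    return (Qp @ KV) / Z[:, None]       # (n, d), no n x n matrix formed

# n nodes, d feature dims: runtime grows linearly in n
rng = np.random.default_rng(0)
n, d = 1000, 16
Q = rng.standard_normal((n, d))
K = rng.standard_normal((n, d))
V = rng.standard_normal((n, d))
out = linearized_attention(Q, K, V)
```

DUALFormer's twist, per the abstract, is to apply this kind of attention along the feature dimension (treating the query as a surrogate for node features) rather than the node dimension; the sketch above only illustrates the standard linearization that makes such a reordering affordable.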
Cite
Text
Zhuo et al. "DUALFormer: Dual Graph Transformer." International Conference on Learning Representations, 2025.
Markdown
[Zhuo et al. "DUALFormer: Dual Graph Transformer." International Conference on Learning Representations, 2025.](https://mlanthology.org/iclr/2025/zhuo2025iclr-dualformer/)
BibTeX
@inproceedings{zhuo2025iclr-dualformer,
title = {{DUALFormer: Dual Graph Transformer}},
author = {Zhuo, Jiaming and Liu, Yuwei and Lu, Yintong and Ma, Ziyi and Fu, Kun and Wang, Chuan and Guo, Yuanfang and Wang, Zhen and Cao, Xiaochun and Yang, Liang},
booktitle = {International Conference on Learning Representations},
year = {2025},
url = {https://mlanthology.org/iclr/2025/zhuo2025iclr-dualformer/}
}