Graph Propagation Transformer for Graph Representation Learning

Abstract

This paper presents a novel transformer architecture for graph representation learning. The core insight of our method is to fully exploit the information propagation among nodes and edges in a graph when building the attention module in the transformer blocks. Specifically, we propose a new attention mechanism called Graph Propagation Attention (GPA). It explicitly passes information among nodes and edges along three paths, i.e., node-to-node, node-to-edge, and edge-to-node, which is essential for learning graph-structured data. Building on GPA, we design an effective transformer architecture named Graph Propagation Transformer (GPTrans) to further improve graph representation learning. We verify the performance of GPTrans in a wide range of graph learning experiments on several benchmark datasets. The results show that our method outperforms many state-of-the-art transformer-based graph models. The code will be released at https://github.com/czczup/GPTrans.
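
As a rough illustration of the three propagation paths named in the abstract, the sketch below shows one plausible way to wire node-to-node, edge-to-node, and node-to-edge propagation into a multi-head attention layer in PyTorch. The tensor shapes, layer names (edge_in, edge_out), and update rules are assumptions for illustration only, not the paper's actual GPA implementation; consult the linked repository for the authors' code.

import torch
import torch.nn as nn

class GraphPropagationAttentionSketch(nn.Module):
    """Hypothetical sketch of the three propagation paths; not the official GPA layer."""

    def __init__(self, dim, num_heads):
        super().__init__()
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        self.scale = self.head_dim ** -0.5
        self.qkv = nn.Linear(dim, dim * 3)
        # Assumption: edge features are stored per attention head.
        self.edge_in = nn.Linear(num_heads, num_heads)   # edge -> attention bias (edge-to-node)
        self.edge_out = nn.Linear(num_heads, num_heads)  # attention -> edge update (node-to-edge)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x, e):
        # x: (B, N, dim) node features; e: (B, N, N, num_heads) edge features
        B, N, _ = x.shape
        qkv = self.qkv(x).reshape(B, N, 3, self.num_heads, self.head_dim)
        q, k, v = qkv.permute(2, 0, 3, 1, 4)  # each: (B, heads, N, head_dim)

        # Node-to-node: standard scaled dot-product attention scores.
        attn = (q @ k.transpose(-2, -1)) * self.scale  # (B, heads, N, N)

        # Edge-to-node: edge features act as an additive attention bias.
        attn = attn + self.edge_in(e).permute(0, 3, 1, 2)

        # Node-to-edge: the attention map writes back into the edge features.
        e = e + self.edge_out(attn.permute(0, 2, 3, 1))

        out = attn.softmax(dim=-1) @ v                 # (B, heads, N, head_dim)
        out = out.transpose(1, 2).reshape(B, N, -1)    # (B, N, dim)
        return self.proj(out), e

The key design point this sketch captures is that edges are first-class citizens: they both bias the node attention and are themselves updated from it, so node and edge representations co-evolve across layers.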

Cite

Text

Chen et al. "Graph Propagation Transformer for Graph Representation Learning." International Joint Conference on Artificial Intelligence, 2023. doi:10.24963/IJCAI.2023/396

Markdown

[Chen et al. "Graph Propagation Transformer for Graph Representation Learning." International Joint Conference on Artificial Intelligence, 2023.](https://mlanthology.org/ijcai/2023/chen2023ijcai-graph/) doi:10.24963/IJCAI.2023/396

BibTeX

@inproceedings{chen2023ijcai-graph,
  title     = {{Graph Propagation Transformer for Graph Representation Learning}},
  author    = {Chen, Zhe and Tan, Hao and Wang, Tao and Shen, Tianrun and Lu, Tong and Peng, Qiuying and Cheng, Cheng and Qi, Yue},
  booktitle = {International Joint Conference on Artificial Intelligence},
  year      = {2023},
  pages     = {3559--3567},
  doi       = {10.24963/IJCAI.2023/396},
  url       = {https://mlanthology.org/ijcai/2023/chen2023ijcai-graph/}
}