Specformer: Spectral Graph Neural Networks Meet Transformers
Abstract
Spectral graph neural networks (GNNs) learn graph representations via spectral-domain graph convolutions. However, most existing spectral graph filters are scalar-to-scalar functions, i.e., they map a single eigenvalue to a single filtered value, thus ignoring the global pattern of the spectrum. Furthermore, these filters are often constructed from fixed-order polynomials, which have limited expressiveness and flexibility. To tackle these issues, we introduce Specformer, which effectively encodes the set of all eigenvalues and performs self-attention in the spectral domain, leading to a learnable set-to-set spectral filter. We also design a decoder with learnable bases to enable non-local graph convolution. Importantly, Specformer is equivariant to permutation. By stacking multiple Specformer layers, one can build a powerful spectral GNN. On synthetic datasets, we show that our Specformer can better recover ground-truth spectral filters than other spectral GNNs. Extensive experiments on both node-level and graph-level tasks on real-world graph datasets show that our Specformer outperforms state-of-the-art GNNs and learns meaningful spectrum patterns. Code and data are available at https://github.com/bdy9527/Specformer.
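The following is a minimal PyTorch sketch of the set-to-set spectral filtering idea described in the abstract: encode every Laplacian eigenvalue as a token, let the tokens attend to one another, and decode per-eigenvalue responses for a few learnable bases. The module and parameter names (SetToSetSpectralFilter, d_model, num_bases) are illustrative assumptions, not the authors' implementation; see the linked repository for the actual code.

```python
# Hedged sketch: a set-to-set spectral filter built from a standard transformer
# encoder over eigenvalue tokens. Names and hyperparameters are assumptions.
import torch
import torch.nn as nn


class SetToSetSpectralFilter(nn.Module):
    def __init__(self, d_model: int = 32, n_heads: int = 4, num_bases: int = 2):
        super().__init__()
        # Map each scalar eigenvalue to a d_model-dimensional token.
        self.eig_encoder = nn.Linear(1, d_model)
        # Self-attention over the whole set of eigenvalue tokens, so each
        # filtered value can depend on the global pattern of the spectrum.
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.attn = nn.TransformerEncoder(layer, num_layers=1)
        # Decode every attended token into one filtered eigenvalue per basis.
        self.decoder = nn.Linear(d_model, num_bases)
        # Mix the learnable bases into a single graph operator.
        self.combine = nn.Linear(num_bases, 1, bias=False)

    def forward(self, eigvals, eigvecs, x):
        # eigvals: (n,), eigvecs: (n, n) with eigenvectors as columns, x: (n, f).
        tokens = self.eig_encoder(eigvals.unsqueeze(-1)).unsqueeze(0)  # (1, n, d)
        tokens = self.attn(tokens).squeeze(0)                          # (n, d)
        new_eigs = self.decoder(tokens)                                # (n, num_bases)
        # One filtered operator per basis: S_b = U diag(lambda'_b) U^T.
        bases = torch.einsum('ik,kb,jk->ijb', eigvecs, new_eigs, eigvecs)
        operator = self.combine(bases).squeeze(-1)                     # (n, n)
        # The operator is generally dense, i.e., a non-local graph convolution.
        return operator @ x


# Toy usage on a 4-node path graph with a symmetrically normalized Laplacian.
A = torch.tensor([[0., 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]])
d = A.sum(1)
L = torch.eye(4) - torch.diag(d.rsqrt()) @ A @ torch.diag(d.rsqrt())
eigvals, eigvecs = torch.linalg.eigh(L)
out = SetToSetSpectralFilter()(eigvals, eigvecs, torch.randn(4, 8))
print(out.shape)  # torch.Size([4, 8])
```

Because the decoder produces new eigenvalues rather than evaluating a fixed polynomial, the resulting operator is not constrained to the neighborhood structure of the input graph, which is what the abstract means by non-local graph convolution.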
Cite
Text
Bo et al. "Specformer: Spectral Graph Neural Networks Meet Transformers." International Conference on Learning Representations, 2023.
Markdown
[Bo et al. "Specformer: Spectral Graph Neural Networks Meet Transformers." International Conference on Learning Representations, 2023.](https://mlanthology.org/iclr/2023/bo2023iclr-specformer/)
BibTeX
@inproceedings{bo2023iclr-specformer,
  title = {{Specformer: Spectral Graph Neural Networks Meet Transformers}},
  author = {Bo, Deyu and Shi, Chuan and Wang, Lele and Liao, Renjie},
  booktitle = {International Conference on Learning Representations},
  year = {2023},
  url = {https://mlanthology.org/iclr/2023/bo2023iclr-specformer/}
}