Enhancing Spectral GNNs: From Topology and Perturbation Perspectives

Abstract

Spectral Graph Neural Networks (GNNs) process graph signals using the spectral properties of the normalized graph Laplacian matrix. However, the frequent occurrence of repeated eigenvalues limits the expressiveness of spectral GNNs. To address this, we propose a higher-dimensional sheaf Laplacian matrix, which not only encodes the graph’s topological information but also increases the upper bound on the number of distinct eigenvalues. The sheaf Laplacian matrix is derived from carefully designed perturbations of the block form of the normalized graph Laplacian, yielding a perturbed sheaf Laplacian (PSL) matrix with more distinct eigenvalues. We provide a theoretical analysis of the expressiveness of spectral GNNs equipped with the PSL and establish perturbation bounds for the eigenvalues. Extensive experiments on benchmark datasets for node classification demonstrate that incorporating the perturbed sheaf Laplacian enhances the performance of spectral GNNs.
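The core idea of the abstract can be illustrated with a small numeric sketch: a graph whose normalized Laplacian has repeated eigenvalues is lifted to a block (sheaf-like) matrix, and a small symmetric perturbation on that block matrix splits the repeated eigenvalues. The block lift and the perturbation below are illustrative stand-ins, not the paper's actual PSL construction.

```python
import numpy as np

# 4-cycle: its normalized Laplacian has the repeated spectrum {0, 1, 1, 2}.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
d_inv_sqrt = 1.0 / np.sqrt(A.sum(axis=1))
L = np.eye(4) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]

def n_distinct(M, tol=1e-6):
    """Count distinct eigenvalues of a symmetric matrix up to tol."""
    w = np.sort(np.linalg.eigvalsh(M))
    return 1 + int(np.sum(np.diff(w) > tol))

# Naive block lift L ⊗ I_d only copies each eigenvalue d times,
# so the set of distinct eigenvalues is unchanged.
d = 2
L_block = np.kron(L, np.eye(d))

# Hypothetical stand-in for the paper's perturbation: a small random
# symmetric matrix supported on the nonzero blocks of L_block.
rng = np.random.default_rng(0)
E = rng.normal(scale=0.05, size=L_block.shape)
E = 0.5 * (E + E.T) * (np.kron(A + np.eye(4), np.ones((d, d))) > 0)
L_psl = L_block + E  # still symmetric, so eigenvalues stay real

print(n_distinct(L), n_distinct(L_block), n_distinct(L_psl))
```

Generically, the random symmetric perturbation splits every repeated eigenvalue of the block matrix, so the perturbed matrix ends up with more distinct eigenvalues than the original normalized Laplacian, which is the property the paper exploits for more expressive spectral filters.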

Cite

Text

Qin et al. "Enhancing Spectral GNNs: From Topology and Perturbation Perspectives." Proceedings of the 42nd International Conference on Machine Learning, 2025.

Markdown

[Qin et al. "Enhancing Spectral GNNs: From Topology and Perturbation Perspectives." Proceedings of the 42nd International Conference on Machine Learning, 2025.](https://mlanthology.org/icml/2025/qin2025icml-enhancing/)

BibTeX

@inproceedings{qin2025icml-enhancing,
  title     = {{Enhancing Spectral GNNs: From Topology and Perturbation Perspectives}},
  author    = {Qin, Taoyang and Chen, Ke-Jia and Liu, Zheng},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  year      = {2025},
  pages     = {50214--50234},
  volume    = {267},
  url       = {https://mlanthology.org/icml/2025/qin2025icml-enhancing/}
}