Rethinking Spectral Augmentation for Contrast-Based Graph Self-Supervised Learning

Abstract

The recent surge in contrast-based graph self-supervised learning has been accompanied by an intensified exploration of spectral cues. Spectral augmentation, which modifies a graph's spectral properties such as its eigenvalues or eigenvectors, is widely believed to enhance model performance. However, an intriguing paradox emerges: methods built on seemingly conflicting assumptions about the spectral domain all report notable gains in learning performance. Through extensive empirical studies, we find that simple edge perturbations (random edge dropping for node-level and random edge adding for graph-level self-supervised learning) consistently yield comparable or superior performance while being significantly more computationally efficient. This suggests that the computational overhead of sophisticated spectral augmentations may not justify their practical benefits. Our theoretical analysis of InfoNCE loss bounds for shallow GNNs further supports this observation. These insights represent a significant step forward, potentially refining the understanding and implementation of graph self-supervised learning.
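
To make the edge-perturbation baselines concrete, below is a minimal sketch assuming PyTorch and a PyG-style edge_index tensor of shape (2, |E|); the function names and the perturbation ratio p are illustrative and not the paper's exact implementation.

import torch

def drop_edges(edge_index: torch.Tensor, p: float = 0.2) -> torch.Tensor:
    # Randomly drop a fraction p of edges (the node-level augmentation
    # described in the abstract); keeps each edge with probability 1 - p.
    keep_mask = torch.rand(edge_index.size(1)) >= p
    return edge_index[:, keep_mask]

def add_edges(edge_index: torch.Tensor, num_nodes: int, p: float = 0.2) -> torch.Tensor:
    # Randomly add roughly p * |E| new edges between uniformly sampled
    # node pairs (the graph-level augmentation described in the abstract).
    num_new = int(p * edge_index.size(1))
    src = torch.randint(0, num_nodes, (num_new,))
    dst = torch.randint(0, num_nodes, (num_new,))
    return torch.cat([edge_index, torch.stack([src, dst], dim=0)], dim=1)

In a typical contrastive pipeline, two such perturbed views of the same graph would be encoded by a shared GNN and aligned under the InfoNCE objective.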

Cite

Text

Jian et al. "Rethinking Spectral Augmentation for Contrast-Based Graph Self-Supervised Learning." Transactions on Machine Learning Research, 2025.

Markdown

[Jian et al. "Rethinking Spectral Augmentation for Contrast-Based Graph Self-Supervised Learning." Transactions on Machine Learning Research, 2025.](https://mlanthology.org/tmlr/2025/jian2025tmlr-rethinking/)

BibTeX

@article{jian2025tmlr-rethinking,
  title     = {{Rethinking Spectral Augmentation for Contrast-Based Graph Self-Supervised Learning}},
  author    = {Jian, Xiangru and Zhao, Xinjian and Pang, Wei and Ying, Chaolong and Wang, Yimu and Xu, Yaoyao and Yu, Tianshu},
  journal   = {Transactions on Machine Learning Research},
  year      = {2025},
  url       = {https://mlanthology.org/tmlr/2025/jian2025tmlr-rethinking/}
}