Adversarial Training for Graph Convolutional Networks: Stability and Generalization Analysis
Abstract
Recently, numerous methods have been proposed to enhance the robustness of Graph Convolutional Networks (GCNs), which are vulnerable to adversarial attacks. Despite their empirical success, a significant gap remains in understanding GCNs' adversarial robustness from a theoretical perspective. This paper addresses this gap by analyzing the generalization of multi-layer GCNs under both node and structure attacks through the framework of uniform stability. Under a smoothness assumption on the loss function, we establish the first adversarial generalization bound for GCNs in expectation. Our theoretical analysis contributes to a deeper understanding of how adversarial perturbations and graph architectures influence generalization performance, providing meaningful insights for designing robust models. Experimental results on benchmark datasets confirm the validity of our theoretical findings, highlighting their practical significance.
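To make the setting concrete, here is a minimal PyTorch sketch of adversarial training on node features for a two-layer GCN. It is illustrative only: the FGSM-style attack, the model sizes, the perturbation budget `eps`, and all helper names are assumptions for this sketch, not the attack model or training procedure analyzed in the paper.

```python
# A minimal sketch of adversarial training for a two-layer GCN, assuming an
# FGSM-style node-feature attack; epsilon, layer widths, and the random toy
# graph below are illustrative assumptions, not the paper's setup.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GCN(nn.Module):
    """Two-layer GCN: logits = A_hat @ relu(A_hat @ X @ W1) @ W2."""
    def __init__(self, in_dim, hid_dim, out_dim):
        super().__init__()
        self.w1 = nn.Linear(in_dim, hid_dim, bias=False)
        self.w2 = nn.Linear(hid_dim, out_dim, bias=False)

    def forward(self, a_hat, x):
        h = F.relu(a_hat @ self.w1(x))   # first graph convolution
        return a_hat @ self.w2(h)        # second graph convolution (logits)

def adversarial_step(model, opt, a_hat, x, y, train_mask, eps=0.05):
    """One training step: perturb node features in the worst-case gradient
    direction (FGSM), then minimize the loss on the perturbed input."""
    x_adv = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(a_hat, x_adv)[train_mask], y[train_mask])
    grad, = torch.autograd.grad(loss, x_adv)
    x_adv = (x + eps * grad.sign()).detach()  # node-feature attack

    opt.zero_grad()
    adv_loss = F.cross_entropy(model(a_hat, x_adv)[train_mask], y[train_mask])
    adv_loss.backward()
    opt.step()
    return adv_loss.item()

# Toy usage on a random graph; the paper's experiments use benchmark datasets.
n, d, c = 50, 16, 3
adj = (torch.rand(n, n) < 0.1).float()
adj = ((adj + adj.T) > 0).float() + torch.eye(n)        # symmetrize, add self-loops
deg_inv_sqrt = adj.sum(1).pow(-0.5)
a_hat = deg_inv_sqrt[:, None] * adj * deg_inv_sqrt[None, :]  # D^{-1/2} A D^{-1/2}

model = GCN(d, 32, c)
opt = torch.optim.Adam(model.parameters(), lr=0.01)
x, y = torch.randn(n, d), torch.randint(0, c, (n,))
mask = torch.rand(n) < 0.5
for epoch in range(5):
    print(adversarial_step(model, opt, a_hat, x, y, mask))
```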
Cite
Text
Cao et al. "Adversarial Training for Graph Convolutional Networks: Stability and Generalization Analysis." International Joint Conference on Artificial Intelligence, 2025. doi:10.24963/IJCAI.2025/534
Markdown
[Cao et al. "Adversarial Training for Graph Convolutional Networks: Stability and Generalization Analysis." International Joint Conference on Artificial Intelligence, 2025.](https://mlanthology.org/ijcai/2025/cao2025ijcai-adversarial/) doi:10.24963/IJCAI.2025/534
BibTeX
@inproceedings{cao2025ijcai-adversarial,
title = {{Adversarial Training for Graph Convolutional Networks: Stability and Generalization Analysis}},
author = {Cao, Chang and Li, Han and Wang, Yulong and Wu, Rui and Chen, Hong},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {2025},
pages = {4797--4805},
doi = {10.24963/IJCAI.2025/534},
url = {https://mlanthology.org/ijcai/2025/cao2025ijcai-adversarial/}
}