A Simple and yet Fairly Effective Defense for Graph Neural Networks
Abstract
Graph neural networks (GNNs) have become the standard approach for machine learning on graphs. However, concerns have been raised regarding their vulnerability to small adversarial perturbations. Existing defense methods suffer from high time complexity and can degrade the model's performance on clean graphs. In this paper, we propose NoisyGCN, a defense method that injects noise into the GCN architecture. We derive a mathematical upper bound linking the GCN's robustness to the injected noise, establishing the effectiveness of our method. Through empirical evaluations on the node classification task, we demonstrate performance superior or comparable to existing defenses while keeping the added time complexity minimal.
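The abstract only sketches the mechanism, so the following is a minimal PyTorch illustration of the general idea of injecting noise into a GCN layer. It assumes additive Gaussian noise on the hidden representations at training time; the noise placement, its distribution, and the scale parameter sigma are illustrative assumptions, not the paper's exact formulation.

import torch
import torch.nn as nn
import torch.nn.functional as F


def normalize_adjacency(adj: torch.Tensor) -> torch.Tensor:
    # Symmetric GCN normalization: D^{-1/2} (A + I) D^{-1/2}.
    adj = adj + torch.eye(adj.size(0))
    d_inv_sqrt = adj.sum(dim=1).pow(-0.5)
    d_inv_sqrt[torch.isinf(d_inv_sqrt)] = 0.0
    return d_inv_sqrt.unsqueeze(1) * adj * d_inv_sqrt.unsqueeze(0)


class NoisyGCNLayer(nn.Module):
    # One GCN layer that adds Gaussian noise to its output during training.
    # The noise distribution and scale `sigma` are illustrative assumptions.
    def __init__(self, in_dim: int, out_dim: int, sigma: float = 0.1):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)
        self.sigma = sigma

    def forward(self, x: torch.Tensor, adj_norm: torch.Tensor) -> torch.Tensor:
        h = adj_norm @ self.linear(x)  # standard GCN propagation
        if self.training and self.sigma > 0:
            h = h + self.sigma * torch.randn_like(h)  # noise injection
        return h


# Toy usage: a two-layer noisy GCN on a 3-node path graph.
adj = torch.tensor([[0., 1., 0.],
                    [1., 0., 1.],
                    [0., 1., 0.]])
adj_norm = normalize_adjacency(adj)
x = torch.randn(3, 8)                      # 3 nodes, 8 input features
layer1 = NoisyGCNLayer(8, 16)
layer2 = NoisyGCNLayer(16, 2)
logits = layer2(F.relu(layer1(x, adj_norm)), adj_norm)
print(logits.shape)                        # torch.Size([3, 2])

At inference the noise is disabled (model.eval()), so clean-graph accuracy is largely preserved, which matches the low-overhead motivation stated in the abstract.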
Cite
Text
Ennadir et al. "A Simple and yet Fairly Effective Defense for Graph Neural Networks." ICML 2023 Workshops: AdvML-Frontiers, 2023.

Markdown

[Ennadir et al. "A Simple and yet Fairly Effective Defense for Graph Neural Networks." ICML 2023 Workshops: AdvML-Frontiers, 2023.](https://mlanthology.org/icmlw/2023/ennadir2023icmlw-simple/)

BibTeX
@inproceedings{ennadir2023icmlw-simple,
  title     = {{A Simple and yet Fairly Effective Defense for Graph Neural Networks}},
  author    = {Ennadir, Sofiane and Abbahaddou, Yassine and Vazirgiannis, Michalis and Boström, Henrik},
  booktitle = {ICML 2023 Workshops: AdvML-Frontiers},
  year      = {2023},
  url       = {https://mlanthology.org/icmlw/2023/ennadir2023icmlw-simple/}
}