Are Defenses for Graph Neural Networks Robust?

Abstract

A cursory reading of the literature suggests that we have made a lot of progress in designing effective adversarial defenses for Graph Neural Networks (GNNs). Yet, the standard methodology has a serious flaw: virtually all of the defenses are evaluated against non-adaptive attacks, leading to overly optimistic robustness estimates. We perform a thorough robustness analysis of 7 of the most popular defenses spanning the entire spectrum of strategies, i.e., aimed at improving the graph, the architecture, or the training. The results are sobering: most defenses show no or only marginal improvement compared to an undefended baseline. We advocate using custom adaptive attacks as a gold standard, and we outline the lessons we learned from successfully designing such attacks. Moreover, our diverse collection of perturbed graphs forms a (black-box) unit test offering a first glance at a model's robustness.
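As a rough illustration of the "(black-box) unit test" idea described above, the sketch below evaluates a node classifier on a collection of pre-computed perturbed graphs and reports its worst-case accuracy. This is a minimal sketch under assumed names: `model` and the graph tuples are hypothetical stand-ins, not the authors' actual artifact or API.

```python
# Hypothetical sketch of a black-box robustness unit test: run a trained
# GNN on a set of adversarially perturbed graphs and report the worst case.
# `model` is any callable mapping (adjacency, features) -> class logits;
# the names and data layout here are illustrative, not the paper's API.
import numpy as np

def accuracy(logits, labels):
    """Fraction of nodes whose predicted class matches the label."""
    return float((logits.argmax(axis=1) == labels).mean())

def robustness_unit_test(model, clean_graph, perturbed_graphs, labels):
    """Compare clean accuracy with the minimum accuracy over all
    perturbed graphs, treating the model as a black box."""
    clean_acc = accuracy(model(*clean_graph), labels)
    worst_acc = min(
        accuracy(model(adj, feats), labels)
        for adj, feats in perturbed_graphs
    )
    return clean_acc, worst_acc
```

Because the test only needs forward passes, it can flag a defense whose reported robustness stems from weak, non-adaptive evaluation, though passing it is no guarantee against a stronger adaptive attack.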

Cite

Text

Mujkanovic et al. "Are Defenses for Graph Neural Networks Robust?" Neural Information Processing Systems, 2022.

Markdown

[Mujkanovic et al. "Are Defenses for Graph Neural Networks Robust?" Neural Information Processing Systems, 2022.](https://mlanthology.org/neurips/2022/mujkanovic2022neurips-defenses/)

BibTeX

@inproceedings{mujkanovic2022neurips-defenses,
  title     = {{Are Defenses for Graph Neural Networks Robust?}},
  author    = {Mujkanovic, Felix and Geisler, Simon and Günnemann, Stephan and Bojchevski, Aleksandar},
  booktitle = {Neural Information Processing Systems},
  year      = {2022},
  url       = {https://mlanthology.org/neurips/2022/mujkanovic2022neurips-defenses/}
}