Revisiting Robustness in Graph Machine Learning
Abstract
Many works show that node-level predictions of Graph Neural Networks (GNNs) are not robust to small, often termed adversarial, changes to the graph structure. However, because manual inspection of a graph is difficult, it is unclear whether the studied perturbations always preserve a core assumption of adversarial examples: that of unchanged semantic content. To address this problem, we introduce a more principled notion of an adversarial graph that is aware of semantic content change. Using Contextual Stochastic Block Models (CSBMs) and real-world graphs, our results uncover: i) for a majority of nodes, the prevalent perturbation models include a large fraction of perturbed graphs violating the unchanged-semantics assumption; ii) surprisingly, all assessed GNNs show over-robustness, that is, robustness beyond the point of semantic change. We find over-robustness to be a phenomenon complementary to adversarial robustness, related to the small degree of nodes and to how strongly a node's class membership depends on its neighbourhood structure.
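To make the CSBM setting concrete, the sketch below samples a two-class CSBM graph: class labels are drawn uniformly, edges are Bernoulli with intra-class probability p and inter-class probability q, and node features are Gaussian with class-dependent means. This is a minimal illustration of the general model; all parameter values are assumptions for the example, not the paper's experimental settings.

```python
# Minimal sketch: sampling a two-class Contextual Stochastic Block Model
# (CSBM) graph. Parameter values (n, p, q, d, sigma) are illustrative
# assumptions, not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)

n = 200            # number of nodes
p, q = 0.05, 0.01  # intra- / inter-class edge probabilities (p > q: homophily)
d = 16             # feature dimension
sigma = 1.0        # feature noise level (assumed)

# Class labels: each node is assigned class 0 or 1 uniformly at random.
y = rng.integers(0, 2, size=n)

# Structure: edge (i, j) is Bernoulli(p) if y_i == y_j, else Bernoulli(q).
same = y[:, None] == y[None, :]
probs = np.where(same, p, q)
upper = np.triu(rng.random((n, n)) < probs, k=1)
A = (upper | upper.T).astype(int)  # symmetric adjacency, no self-loops

# Context: Gaussian node features with class-dependent means +/- mu.
mu = rng.standard_normal(d) / np.sqrt(d)
X = (2 * y[:, None] - 1) * mu[None, :] + sigma * rng.standard_normal((n, d))
```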
Cite
Text
Gosch et al. "Revisiting Robustness in Graph Machine Learning." NeurIPS 2022 Workshops: MLSW, 2022.
Markdown
[Gosch et al. "Revisiting Robustness in Graph Machine Learning." NeurIPS 2022 Workshops: MLSW, 2022.](https://mlanthology.org/neuripsw/2022/gosch2022neuripsw-revisiting/)
BibTeX
@inproceedings{gosch2022neuripsw-revisiting,
  title     = {{Revisiting Robustness in Graph Machine Learning}},
  author    = {Gosch, Lukas and Sturm, Daniel and Geisler, Simon and Günnemann, Stephan},
  booktitle = {NeurIPS 2022 Workshops: MLSW},
  year      = {2022},
  url       = {https://mlanthology.org/neuripsw/2022/gosch2022neuripsw-revisiting/}
}