Revisiting Robustness in Graph Machine Learning
Abstract
Many works show that node-level predictions of Graph Neural Networks (GNNs) are not robust to small, often termed adversarial, changes to the graph structure. However, because manual inspection of a graph is difficult, it is unclear if the studied perturbations always preserve a core assumption of adversarial examples: that of unchanged semantic content. To address this problem, we introduce a more principled notion of an adversarial graph, which is aware of semantic content change. Using Contextual Stochastic Block Models (CSBMs) and real-world graphs, our results suggest: $i)$ for a majority of nodes the prevalent perturbation models include a large fraction of perturbed graphs violating the unchanged semantics assumption; $ii)$ surprisingly, all assessed GNNs show over-robustness, that is, robustness beyond the point of semantic change. We find this to be a complementary phenomenon to adversarial examples and show that including the label-structure of the training graph into the inference process of GNNs significantly reduces over-robustness, while having a positive effect on test accuracy and adversarial robustness. Theoretically, leveraging our new semantics-aware notion of robustness, we prove that there is no robustness-accuracy tradeoff for inductively classifying a newly added node.
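The analysis in the abstract relies on Contextual Stochastic Block Models (CSBMs), in which a node's class determines both its edge probabilities and its (Gaussian) features, so that semantic content change can be reasoned about explicitly. Below is a minimal sketch of sampling a two-class CSBM graph; the function name `sample_csbm` and its parameters (`p_intra`, `p_inter`, `mu`, `sigma`) are illustrative assumptions, not the paper's exact parameterization.

```python
# Minimal sketch: sample a two-class Contextual Stochastic Block Model (CSBM) graph.
# Parameter names and defaults are assumptions for illustration only.
import numpy as np

def sample_csbm(n=200, p_intra=0.05, p_inter=0.01, mu=1.0, sigma=1.0, d=16, seed=0):
    rng = np.random.default_rng(seed)
    y = rng.integers(0, 2, size=n)                    # binary community labels
    # Structure: edge probability depends on whether the endpoints share a label.
    same = (y[:, None] == y[None, :])
    probs = np.where(same, p_intra, p_inter)
    upper = np.triu(rng.random((n, n)) < probs, k=1)  # sample upper triangle only
    adj = (upper | upper.T).astype(int)               # symmetric adjacency, no self-loops
    # Context: class-dependent Gaussian features along a shared random direction.
    direction = rng.normal(size=d)
    direction /= np.linalg.norm(direction)
    features = (2 * y[:, None] - 1) * mu * direction + sigma * rng.normal(size=(n, d))
    return adj, features, y

adj, X, y = sample_csbm()
```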
Cite
Text
Gosch et al. "Revisiting Robustness in Graph Machine Learning." International Conference on Learning Representations, 2023.

Markdown

[Gosch et al. "Revisiting Robustness in Graph Machine Learning." International Conference on Learning Representations, 2023.](https://mlanthology.org/iclr/2023/gosch2023iclr-revisiting/)

BibTeX
@inproceedings{gosch2023iclr-revisiting,
title = {{Revisiting Robustness in Graph Machine Learning}},
author = {Gosch, Lukas and Sturm, Daniel and Geisler, Simon and Günnemann, Stephan},
booktitle = {International Conference on Learning Representations},
year = {2023},
url = {https://mlanthology.org/iclr/2023/gosch2023iclr-revisiting/}
}