Poisoning $\times$ Evasion: Symbiotic Adversarial Robustness for Graph Neural Networks
Abstract
It is well known that deep learning models are vulnerable to small input perturbations. Such perturbed instances are called adversarial examples. Adversarial examples are commonly crafted to fool a model either at training time (poisoning) or at test time (evasion). In this work, we study the symbiosis of poisoning and evasion. We show that combining both threat models can substantially improve the devastating efficacy of adversarial attacks. Specifically, we study the robustness of Graph Neural Networks (GNNs) under structure perturbations and devise a memory-efficient adaptive end-to-end attack for this novel threat model using first-order optimization.
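To make the first-order structure-perturbation idea concrete, the following is a minimal sketch, not the authors' memory-efficient adaptive attack: it relaxes edge flips on a dense adjacency matrix, takes one gradient of an attack loss through a toy two-layer GCN, and greedily flips the `budget` edges with the largest gradient. The helper names (`gcn_forward`, `structure_attack`), the dense adjacency, and the single-step greedy selection are illustrative assumptions, assuming PyTorch.

```python
import torch
import torch.nn.functional as F

def gcn_forward(adj, x, w1, w2):
    """Two-layer GCN with symmetric normalization on a dense adjacency (toy model)."""
    a_hat = adj + torch.eye(adj.size(0))                  # add self-loops
    d_inv_sqrt = a_hat.sum(1).clamp(min=1e-8).pow(-0.5)   # D^{-1/2}
    a_norm = d_inv_sqrt[:, None] * a_hat * d_inv_sqrt[None, :]
    h = torch.relu(a_norm @ x @ w1)
    return a_norm @ h @ w2                                # logits

def structure_attack(adj, x, labels, w1, w2, budget):
    """Greedy first-order structure attack (illustrative, not the paper's method):
    flip the `budget` edges whose gradient most increases the attack loss."""
    pert = torch.zeros_like(adj, requires_grad=True)      # relaxed edge-flip variables
    flipped = adj + (1 - 2 * adj) * pert                  # flips 0->1 or 1->0 as pert -> 1
    loss = F.cross_entropy(gcn_forward(flipped, x, w1, w2), labels)
    grad = torch.autograd.grad(loss, pert)[0]
    grad = torch.triu(grad, diagonal=1)                   # undirected graph, no self-loops
    idx = torch.topk(grad.flatten(), budget).indices      # most harmful candidate flips
    rows = torch.div(idx, adj.size(0), rounding_mode="floor")
    cols = idx % adj.size(0)
    adv = adj.clone()
    adv[rows, cols] = 1 - adv[rows, cols]                 # apply the flips
    adv[cols, rows] = adv[rows, cols]                     # keep the adjacency symmetric
    return adv
```

In this simplified view, applying such flips before training corresponds to poisoning, while applying them to a trained model corresponds to evasion; the paper's contribution is to optimize both stages jointly end to end.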
Cite
Text
Erdogan et al. "Poisoning $\times$ Evasion: Symbiotic Adversarial Robustness for Graph Neural Networks." NeurIPS 2023 Workshops: GLFrontiers, 2023.
Markdown
[Erdogan et al. "Poisoning $\times$ Evasion: Symbiotic Adversarial Robustness for Graph Neural Networks." NeurIPS 2023 Workshops: GLFrontiers, 2023.](https://mlanthology.org/neuripsw/2023/erdogan2023neuripsw-poisoning/)
BibTeX
@inproceedings{erdogan2023neuripsw-poisoning,
title = {{Poisoning $\times$ Evasion: Symbiotic Adversarial Robustness for Graph Neural Networks}},
author = {Erdogan, Ege and Geisler, Simon and Günnemann, Stephan},
booktitle = {NeurIPS 2023 Workshops: GLFrontiers},
year = {2023},
url = {https://mlanthology.org/neuripsw/2023/erdogan2023neuripsw-poisoning/}
}