Adversarial Examples for Graph Data: Deep Insights into Attack and Defense

Abstract

Graph deep learning models, such as graph convolutional networks (GCNs), achieve state-of-the-art performance on tasks over graph data. However, like other deep learning models, they are susceptible to adversarial attacks. Compared with non-graph data, the discrete nature of graph connections and features presents unique challenges and opportunities for adversarial attacks and defenses. In this paper, we propose techniques for both an adversarial attack and a defense against adversarial attacks. First, we show that the discreteness of graph connections and of the features in common datasets can be handled with the integrated gradients technique, which accurately estimates the effect of changing selected features or edges while still benefiting from parallel computation. In addition, we show that a graph manipulated by a targeted attack differs statistically from unmanipulated graphs. Based on this observation, we propose a defense that can detect and recover a potential adversarial perturbation. Our experiments on a number of datasets show the effectiveness of the proposed techniques.
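The abstract's attack rests on integrated gradients, which attribute a model's output to each input coordinate by integrating gradients along a path from a baseline to the input. Below is a minimal sketch of that attribution, not the paper's implementation: the `score_fn(z) -> (score, grad)` interface is a hypothetical stand-in for a model that returns both a scalar score (e.g., a target node's classification loss) and its gradient.

```python
import numpy as np

def integrated_gradients(score_fn, x, baseline, steps=32):
    """Riemann-sum approximation of integrated gradients.

    score_fn(z) is assumed (hypothetical interface) to return
    (score, grad), where grad is dF/dz at z. The attribution for
    coordinate i approximates
        (x_i - baseline_i) * integral_0^1 dF(baseline + a*(x - baseline))/dx_i da.
    """
    total_grad = np.zeros_like(x, dtype=float)
    for k in range(1, steps + 1):
        point = baseline + (k / steps) * (x - baseline)
        _, grad = score_fn(point)
        total_grad += grad
    return (x - baseline) * total_grad / steps

# Toy check with F(x) = sum(x**2), so grad = 2x and the exact IG is x**2
# when the baseline is all zeros.
toy = lambda z: (np.sum(z ** 2), 2.0 * z)
print(integrated_gradients(toy, np.array([0.0, 0.5, 1.0]), np.zeros(3)))
# close to [0.0, 0.25, 1.0]
```

In the attack setting, `x` would plausibly be the flattened binary adjacency (or feature) matrix, with an all-zeros baseline when scoring candidate edge or feature insertions; that wiring is our assumption about usage, not a detail given in the abstract.

For the defense, the abstract only states that attacked graphs differ statistically from clean ones. One plausible instantiation of that idea (an assumption on our part: the statistic and the threshold below are illustrative) scores each edge by the Jaccard similarity of its endpoints' binary features and prunes edges between nodes that share essentially no features.

```python
import numpy as np

def jaccard(u, v):
    """Jaccard similarity of two binary feature vectors."""
    inter = np.logical_and(u, v).sum()
    union = np.logical_or(u, v).sum()
    return inter / union if union else 0.0

def prune_dissimilar_edges(adj, features, threshold=0.0):
    """Remove edges between nodes whose features are (almost) disjoint.

    adj: dense symmetric binary adjacency matrix (n x n).
    features: binary node-feature matrix (n x d).
    Edges with Jaccard similarity <= threshold are treated as suspicious
    and dropped; threshold=0 only cuts edges between nodes that share
    no features at all.
    """
    cleaned = adj.copy()
    rows, cols = np.nonzero(np.triu(adj, k=1))  # each undirected edge once
    for i, j in zip(rows, cols):
        if jaccard(features[i], features[j]) <= threshold:
            cleaned[i, j] = cleaned[j, i] = 0
    return cleaned
```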

Cite

Text

Wu et al. "Adversarial Examples for Graph Data: Deep Insights into Attack and Defense." International Joint Conference on Artificial Intelligence, 2019. doi:10.24963/IJCAI.2019/669

Markdown

[Wu et al. "Adversarial Examples for Graph Data: Deep Insights into Attack and Defense." International Joint Conference on Artificial Intelligence, 2019.](https://mlanthology.org/ijcai/2019/wu2019ijcai-adversarial/) doi:10.24963/IJCAI.2019/669

BibTeX

@inproceedings{wu2019ijcai-adversarial,
  title     = {{Adversarial Examples for Graph Data: Deep Insights into Attack and Defense}},
  author    = {Wu, Huijun and Wang, Chen and Tyshetskiy, Yuriy and Docherty, Andrew and Lu, Kai and Zhu, Liming},
  booktitle = {International Joint Conference on Artificial Intelligence},
  year      = {2019},
  pages     = {4816--4823},
  doi       = {10.24963/IJCAI.2019/669},
  url       = {https://mlanthology.org/ijcai/2019/wu2019ijcai-adversarial/}
}