GNN Predictions on K-Hop Egonets Boosts Adversarial Robustness

Abstract

Like many other deep learning models, Graph Neural Networks (GNNs) have been shown to be susceptible to adversarial attacks, i.e., the addition of crafted, imperceptible noise to the input data drastically changes the model's predictions. We propose k-HOP-PURIFY, a very simple method that makes the prediction for a node on the k-hop egonet centered at that node instead of on the entire graph, which boosts adversarial accuracy. It can be used either i) as a post-processing step after applying popular defenses or ii) as a standalone defense competitive with many existing methods. The method is extremely lightweight and scalable (it takes four lines of code to implement), unlike many other defenses that are computationally expensive or rely on heuristics. We show performance gains through extensive experiments across attack types (poisoning/evasion, targeted/untargeted), perturbation rates, and defenses implemented in the DeepRobust library.
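
The core idea fits in a few lines. Below is a minimal sketch (not the authors' code) using PyTorch Geometric's `k_hop_subgraph` utility, where `model`, `x`, `edge_index`, and `node_idx` are assumed placeholders for a trained GNN, the node feature matrix, the graph's edge list, and the target node:

```python
import torch
from torch_geometric.utils import k_hop_subgraph

def khop_purify_predict(model, x, edge_index, node_idx, k=2):
    # Extract the k-hop egonet around node_idx, relabelling nodes locally.
    subset, sub_edge_index, mapping, _ = k_hop_subgraph(
        node_idx, k, edge_index, relabel_nodes=True, num_nodes=x.size(0))
    # Run the GNN on the egonet only, then read off the center node's logits.
    out = model(x[subset], sub_edge_index)
    return out[mapping]
```

One consequence of restricting inference this way is that edges and nodes outside the k-hop egonet cannot influence the prediction at all, which is plausibly where the robustness gain comes from.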

Cite

Text

Vora. "GNN Predictions on K-Hop Egonets Boosts Adversarial Robustness." NeurIPS 2023 Workshops: GLFrontiers, 2023.

Markdown

[Vora. "GNN Predictions on K-Hop Egonets Boosts Adversarial Robustness." NeurIPS 2023 Workshops: GLFrontiers, 2023.](https://mlanthology.org/neuripsw/2023/vora2023neuripsw-gnn/)

BibTeX

@inproceedings{vora2023neuripsw-gnn,
  title     = {{GNN Predictions on K-Hop Egonets Boosts Adversarial Robustness}},
  author    = {Vora, Jian},
  booktitle = {NeurIPS 2023 Workshops: GLFrontiers},
  year      = {2023},
  url       = {https://mlanthology.org/neuripsw/2023/vora2023neuripsw-gnn/}
}