Pitfalls in Evaluating GNNs Under Label Poisoning Attacks

Abstract

Graph Neural Networks (GNNs) have shown impressive performance on several graph-based tasks. However, recent research on adversarial attacks shows how sensitive GNNs are to node, edge, and label perturbations. Of particular interest is the label poisoning attack, where flipping an unnoticeable fraction of training labels can adversely affect GNNs' performance. While several such attacks have been proposed, latent flaws in the evaluation setup cloud their true effectiveness. In this work, we uncover five frequent pitfalls in the evaluation setup that plague all existing label-poisoning attacks for GNNs. We observe that, in some settings, the state-of-the-art attacks are no better than a random label-flipping attack. We propose and advocate for a new evaluation setup that remedies these shortcomings and can help gauge the potency of label-poisoning attacks fairly. After remedying the pitfalls, we see a difference in performance of up to 19.37% on the Cora-ML dataset.
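The random label-flipping attack mentioned above is the natural baseline against which stronger poisoning attacks should be compared. The sketch below is a minimal illustration of such a baseline, not the paper's code: the function name, arguments (`labels`, `train_idx`, `budget_frac`), and the uniform choice over other classes are all assumptions made for this example.

```python
import numpy as np

def random_label_flip(labels, train_idx, budget_frac, num_classes, seed=0):
    """Hypothetical random label-flipping baseline for label poisoning.

    labels:      1-D array of integer class labels for all nodes
    train_idx:   indices of training nodes the attacker may poison
    budget_frac: fraction of training labels to flip (e.g. 0.1)
    """
    rng = np.random.default_rng(seed)
    poisoned = labels.copy()
    budget = int(budget_frac * len(train_idx))
    # Pick which training nodes to poison, without replacement
    targets = rng.choice(train_idx, size=budget, replace=False)
    for i in targets:
        # Replace the true label with a different class chosen uniformly at random
        other_classes = [c for c in range(num_classes) if c != poisoned[i]]
        poisoned[i] = rng.choice(other_classes)
    return poisoned
```

A GNN is then trained on the poisoned labels, and the drop in clean test accuracy relative to training on the original labels measures the attack's potency; a proposed attack that does not beat this baseline under a fair evaluation setup offers little practical threat.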

Cite

Text

Lingam et al. "Pitfalls in Evaluating GNNs Under Label Poisoning Attacks." ICLR 2023 Workshops: Trustworthy_ML, 2023.

Markdown

[Lingam et al. "Pitfalls in Evaluating GNNs Under Label Poisoning Attacks." ICLR 2023 Workshops: Trustworthy_ML, 2023.](https://mlanthology.org/iclrw/2023/lingam2023iclrw-pitfalls/)

BibTeX

@inproceedings{lingam2023iclrw-pitfalls,
  title     = {{Pitfalls in Evaluating GNNs Under Label Poisoning Attacks}},
  author    = {Lingam, Vijay and Akhondzadeh, Mohammad Sadegh and Bojchevski, Aleksandar},
  booktitle = {ICLR 2023 Workshops: Trustworthy_ML},
  year      = {2023},
  url       = {https://mlanthology.org/iclrw/2023/lingam2023iclrw-pitfalls/}
}