Evolutionary Generalized Zero-Shot Learning

Abstract

Graph learning models have been empirically shown to be vulnerable to backdoor threats, wherein adversaries submit trigger-embedded inputs to manipulate model predictions. Current graph backdoor defenses exhibit several limitations: 1) dependence on model-related details, 2) the need for additional fine-tuning, and 3) reliance on extra explainability tools, all of which are infeasible under stringent privacy policies. To address these limitations, we propose GraphProt, a certified black-box defense method that suppresses backdoor attacks on GNN-based graph classifiers. GraphProt operates in a model-agnostic manner and relies solely on the graph input. Specifically, GraphProt first applies a topology- and feature-based filtration step to mitigate graph anomalies. Subsequently, subgraphs are sampled via a strategy that integrates topology and features, and robust inference is obtained through a majority vote over the subgraph predictions. Our results across benchmark attacks and datasets show that GraphProt effectively reduces attack success rates while preserving regular graph classification accuracy.
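The abstract describes inference as a majority vote over predictions on sampled subgraphs. The paper's exact sampling strategy is not given here, so the following is only a minimal sketch of that general idea, with a simple random node-induced sampler standing in for the topology-and-feature-based strategy; all names (`sample_subgraph`, `majority_vote_predict`, `classify`, `keep_ratio`) are hypothetical:

```python
import random
from collections import Counter

def sample_subgraph(nodes, edges, keep_ratio=0.8, rng=None):
    """Sample a node-induced subgraph by keeping a random subset of nodes.

    Placeholder for the paper's topology-and-feature-aware sampling strategy.
    """
    rng = rng or random.Random()
    k = max(1, int(len(nodes) * keep_ratio))
    kept = set(rng.sample(sorted(nodes), k))
    sub_edges = [(u, v) for (u, v) in edges if u in kept and v in kept]
    return kept, sub_edges

def majority_vote_predict(nodes, edges, classify, n_samples=21, keep_ratio=0.8, seed=0):
    """Black-box ensemble inference: classify many sampled subgraphs and
    return the majority label, as in the vote-based ensemble the abstract describes."""
    rng = random.Random(seed)
    votes = Counter(
        classify(*sample_subgraph(nodes, edges, keep_ratio, rng))
        for _ in range(n_samples)
    )
    return votes.most_common(1)[0][0]
```

Here `classify` is any black-box graph classifier taking a node set and edge list; because only inputs and outputs are touched, no model internals or fine-tuning are required, matching the black-box setting.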

Cite

Text

Chen et al. "Evolutionary Generalized Zero-Shot Learning." International Joint Conference on Artificial Intelligence, 2024. doi:10.24963/ijcai.2024/70

Markdown

[Chen et al. "Evolutionary Generalized Zero-Shot Learning." International Joint Conference on Artificial Intelligence, 2024.](https://mlanthology.org/ijcai/2024/chen2024ijcai-evolutionary/) doi:10.24963/ijcai.2024/70

BibTeX

@inproceedings{chen2024ijcai-evolutionary,
  title     = {{Evolutionary Generalized Zero-Shot Learning}},
  author    = {Chen, Dubing and Jiang, Chenyi and Zhang, Haofeng},
  booktitle = {International Joint Conference on Artificial Intelligence},
  year      = {2024},
  pages     = {632--640},
  doi       = {10.24963/ijcai.2024/70},
  url       = {https://mlanthology.org/ijcai/2024/chen2024ijcai-evolutionary/}
}