Enhancing the Antidote: Improved Pointwise Certifications Against Poisoning Attacks

Abstract

Poisoning attacks can disproportionately influence model behaviour by making small changes to the training corpus. While defences against specific poisoning attacks do exist, they generally provide no guarantees, leaving them potentially countered by novel attacks. In contrast, by examining worst-case behaviours, certified defences make it possible to guarantee a sample's robustness against adversarial attacks that modify a finite number of training samples, a property known as pointwise certification. We achieve this by exploiting both differential privacy and the Sampled Gaussian Mechanism to ensure that each test instance's prediction is invariant to a finite number of poisoned examples. In doing so, our model provides guarantees of adversarial robustness that are more than twice as large as those provided by prior certifications.
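The abstract's core ingredient, the Sampled Gaussian Mechanism, can be illustrated with a toy sketch. This is an assumption-laden example, not the authors' implementation: each training record is included independently with probability `q`, and the resulting aggregate is released with Gaussian noise of scale `sigma`. The function name and parameters here are illustrative only.

```python
import random

def sampled_gaussian_sum(values, q, sigma, rng):
    """Toy Sampled Gaussian Mechanism: subsample each value
    independently with probability q, then add N(0, sigma^2)
    noise to the sum of the retained values."""
    subsample = [v for v in values if rng.random() < q]
    return sum(subsample) + rng.gauss(0.0, sigma)

# Deterministic demonstration on a small synthetic dataset.
rng = random.Random(0)
data = [1.0] * 100
noisy = sampled_gaussian_sum(data, q=0.5, sigma=2.0, rng=rng)
```

Because a single poisoned record only enters the subsample with probability `q` and its effect is masked by the Gaussian noise, the released value (and hence a prediction derived from it) changes little under bounded modifications of the training set, which is the intuition behind the certification.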

Cite

Text

Liu et al. "Enhancing the Antidote: Improved Pointwise Certifications Against Poisoning Attacks." AAAI Conference on Artificial Intelligence, 2023. doi:10.1609/aaai.v37i7.26065

Markdown

[Liu et al. "Enhancing the Antidote: Improved Pointwise Certifications Against Poisoning Attacks." AAAI Conference on Artificial Intelligence, 2023.](https://mlanthology.org/aaai/2023/liu2023aaai-enhancing/) doi:10.1609/aaai.v37i7.26065

BibTeX

@inproceedings{liu2023aaai-enhancing,
  title     = {{Enhancing the Antidote: Improved Pointwise Certifications Against Poisoning Attacks}},
  author    = {Liu, Shijie and Cullen, Andrew C. and Montague, Paul and Erfani, Sarah M. and Rubinstein, Benjamin I. P.},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2023},
  pages     = {8861--8869},
  doi       = {10.1609/aaai.v37i7.26065},
  url       = {https://mlanthology.org/aaai/2023/liu2023aaai-enhancing/}
}