Certified Defenses for Data Poisoning Attacks

Abstract

Machine learning systems trained on user-provided data are susceptible to data poisoning attacks, whereby malicious users inject false training data with the aim of corrupting the learned model. While recent work has proposed a number of attacks and defenses, little is understood about the worst-case loss of a defense in the face of a determined attacker. We address this by constructing approximate upper bounds on the loss across a broad family of attacks, for defenders that first perform outlier removal followed by empirical risk minimization. Our approximation relies on two assumptions: (1) that the dataset is large enough for statistical concentration between train and test error to hold, and (2) that outliers within the clean (non-poisoned) data do not have a strong effect on the model. Our bound comes paired with a candidate attack that often nearly matches the upper bound, giving us a powerful tool for quickly assessing defenses on a given dataset. Empirically, we find that even under a simple defense, the MNIST-1-7 and Dogfish datasets are resilient to attack, while in contrast the IMDB sentiment dataset can be driven from 12% to 23% test error by adding only 3% poisoned data.
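The defense family the bound applies to (outlier removal followed by empirical risk minimization) can be illustrated with a minimal sketch. The snippet below is not the paper's implementation and does not compute its certified bound; it assumes a simple sphere-style filter that discards points far from their class centroid and a linear SVM as the empirical risk minimizer, with the radius quantile and model settings chosen arbitrarily for the example.

```python
# Illustrative sketch (not the paper's code): outlier removal, then ERM.
import numpy as np
from sklearn.svm import LinearSVC

def sphere_defense_then_erm(X, y, radius_quantile=0.95):
    """Drop points far from their class centroid, then fit a linear model."""
    X, y = np.asarray(X, dtype=float), np.asarray(y)
    keep = np.zeros(len(y), dtype=bool)
    for label in np.unique(y):
        idx = np.where(y == label)[0]
        centroid = X[idx].mean(axis=0)
        dists = np.linalg.norm(X[idx] - centroid, axis=1)
        # Keep only points inside a data-dependent radius (the feasible set).
        keep[idx] = dists <= np.quantile(dists, radius_quantile)
    model = LinearSVC(C=1.0).fit(X[keep], y[keep])  # empirical risk minimization
    return model, keep

# Usage on synthetic data with a few injected "poison" points.
rng = np.random.default_rng(0)
X_clean = np.vstack([rng.normal(-2, 1, (100, 2)), rng.normal(2, 1, (100, 2))])
y_clean = np.array([0] * 100 + [1] * 100)
X_poison = rng.normal(8, 0.5, (6, 2))  # far-away points mislabeled as class 0
X_all = np.vstack([X_clean, X_poison])
y_all = np.concatenate([y_clean, np.zeros(6, dtype=int)])
model, kept = sphere_defense_then_erm(X_all, y_all)
print(f"kept {kept.sum()} of {len(y_all)} training points")
```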

Cite

Text

Steinhardt et al. "Certified Defenses for Data Poisoning Attacks." Neural Information Processing Systems, 2017.

Markdown

[Steinhardt et al. "Certified Defenses for Data Poisoning Attacks." Neural Information Processing Systems, 2017.](https://mlanthology.org/neurips/2017/steinhardt2017neurips-certified/)

BibTeX

@inproceedings{steinhardt2017neurips-certified,
  title     = {{Certified Defenses for Data Poisoning Attacks}},
  author    = {Steinhardt, Jacob and Koh, Pang Wei and Liang, Percy},
  booktitle = {Neural Information Processing Systems},
  year      = {2017},
  pages     = {3517--3529},
  url       = {https://mlanthology.org/neurips/2017/steinhardt2017neurips-certified/}
}