Manipulating SGD with Data Ordering Attacks
Abstract
Machine learning is vulnerable to a wide variety of attacks. It is now well understood that by changing the underlying data distribution, an adversary can poison the model trained with it or introduce backdoors. In this paper we present a novel class of training-time attacks that require no changes to the underlying dataset or model architecture, but instead only change the order in which data are supplied to the model. In particular, we find that the attacker can either prevent the model from learning, or poison it to learn behaviours specified by the attacker. Furthermore, we find that a single adversarially-ordered epoch can be enough to slow model learning, or even to reset all learning progress. The attacks presented here are not specific to the model or dataset, but rather target the stochastic nature of modern learning procedures. We evaluate our attacks extensively on computer vision and natural language benchmarks, finding that the adversary can disrupt model training and even introduce backdoors.
Cite
Text
Shumailov et al. "Manipulating SGD with Data Ordering Attacks." Neural Information Processing Systems, 2021.

Markdown

[Shumailov et al. "Manipulating SGD with Data Ordering Attacks." Neural Information Processing Systems, 2021.](https://mlanthology.org/neurips/2021/shumailov2021neurips-manipulating/)

BibTeX
@inproceedings{shumailov2021neurips-manipulating,
title = {{Manipulating SGD with Data Ordering Attacks}},
author = {Shumailov, Ilia and Shumaylov, Zakhar and Kazhdan, Dmitry and Zhao, Yiren and Papernot, Nicolas and Erdogdu, Murat A. and Anderson, Ross J.},
booktitle = {Neural Information Processing Systems},
year = {2021},
url = {https://mlanthology.org/neurips/2021/shumailov2021neurips-manipulating/}
}