Model-Targeted Poisoning Attacks with Provable Convergence
Abstract
In a poisoning attack, an adversary who controls a small fraction of the training data attempts to select that data so as to induce a model that misbehaves in a particular way. We consider poisoning attacks against convex machine learning models and propose an efficient poisoning attack designed to induce a model specified by the adversary. Unlike previous model-targeted poisoning attacks, our attack comes with provable convergence to any attainable target model. We also provide a lower bound on the minimum number of poisoning points needed to achieve a given target model. Our method uses online convex optimization and finds poisoning points incrementally, which provides more flexibility than previous attacks that require the number of poisoning points to be fixed a priori. Our attack is the first model-targeted poisoning attack with provable convergence for convex models, and in our experiments it either exceeds or matches state-of-the-art attacks in attack success rate and distance to the target model.
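The abstract describes the attack only at a high level: build the poison set incrementally via online convex optimization until the induced model is close to the target. As a rough illustration of that incremental loop (a minimal sketch, not the authors' exact algorithm), the Python snippet below repeatedly retrains a logistic regression model on the clean-plus-poison data and greedily adds the candidate point whose loss under the current induced model most exceeds its loss under the adversary's target model. The candidate pool `X_cand`/`y_cand`, the loss-gap selection rule, the scikit-learn solver, and the stopping condition are all illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def per_example_loss(theta, b, X, y):
    # Logistic loss of the linear model (theta, b) on each example; y in {-1, +1}.
    # logaddexp keeps the computation numerically stable for large margins.
    return np.logaddexp(0.0, -y * (X @ theta + b))

def model_targeted_attack(X_clean, y_clean, theta_tgt, b_tgt,
                          X_cand, y_cand, max_points=200):
    """Greedy incremental poisoning sketch (illustrative, not the paper's
    exact procedure): retrain on clean + poison data, then add the candidate
    with the largest loss gap between the induced and target models."""
    X_pois, y_pois = [], []
    for _ in range(max_points):
        X_train = np.vstack([X_clean] + X_pois)
        y_train = np.concatenate([y_clean] + y_pois)
        clf = LogisticRegression().fit(X_train, y_train)
        theta_t, b_t = clf.coef_.ravel(), clf.intercept_[0]
        # Gap is large on points the induced model gets badly wrong
        # but the target model fits well.
        gap = (per_example_loss(theta_t, b_t, X_cand, y_cand)
               - per_example_loss(theta_tgt, b_tgt, X_cand, y_cand))
        i = int(np.argmax(gap))
        if gap[i] <= 0:  # no candidate moves the induced model toward the target
            break
        X_pois.append(X_cand[i:i + 1])
        y_pois.append(y_cand[i:i + 1])
    if not X_pois:
        return np.empty((0, X_clean.shape[1])), np.empty(0)
    return np.vstack(X_pois), np.concatenate(y_pois)
```

In practice the loop would terminate once the parameter distance between the induced and target models falls below a tolerance, which corresponds to the "distance to the target model" metric reported in the experiments.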
Cite
Text
Suya et al. "Model-Targeted Poisoning Attacks with Provable Convergence." International Conference on Machine Learning, 2021.
Markdown
[Suya et al. "Model-Targeted Poisoning Attacks with Provable Convergence." International Conference on Machine Learning, 2021.](https://mlanthology.org/icml/2021/suya2021icml-modeltargeted/)
BibTeX
@inproceedings{suya2021icml-modeltargeted,
  title = {{Model-Targeted Poisoning Attacks with Provable Convergence}},
  author = {Suya, Fnu and Mahloujifar, Saeed and Suri, Anshuman and Evans, David and Tian, Yuan},
  booktitle = {International Conference on Machine Learning},
  year = {2021},
  pages = {10000--10010},
  volume = {139},
  url = {https://mlanthology.org/icml/2021/suya2021icml-modeltargeted/}
}