How to Backdoor Federated Learning
Abstract
Federated models are created by aggregating model updates submitted by participants. To protect confidentiality of the training data, the aggregator by design has no visibility into how these updates are generated. We show that this makes federated learning vulnerable to a model-poisoning attack that is significantly more powerful than poisoning attacks that target only the training data. A single or multiple malicious participants can use model replacement to introduce backdoor functionality into the joint model, e.g., modify an image classifier so that it assigns an attacker-chosen label to images with certain features, or force a word predictor to complete certain sentences with an attacker-chosen word. We evaluate model replacement under different assumptions for the standard federated-learning tasks and show that it greatly outperforms training-data poisoning. Federated learning employs secure aggregation to protect confidentiality of participants' local models and thus cannot detect anomalies in participants' contributions to the joint model. To demonstrate that anomaly detection would not have been effective in any case, we also develop and evaluate a generic constrain-and-scale technique that incorporates the evasion of defenses into the attacker's loss function during training.
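The model-replacement attack described above reduces to a single scaling step on top of federated averaging, and constrain-and-scale adds an anomaly term to the attacker's training loss. Below is a minimal PyTorch-style sketch, assuming a FedAvg aggregator with n selected participants and global learning rate η; all names (global_model, backdoored_model, n_participants, global_lr, alpha) are illustrative and not taken from the paper's code release.

```python
# Hedged sketch of model replacement under FedAvg:
#   G^{t+1} = G^t + (lr / n) * sum_i (L_i - G^t)
# The attacker submits L~ = gamma * (X - G^t) + G^t with gamma = n / lr,
# assuming benign updates roughly cancel once the joint model has converged,
# so the aggregate is driven toward the attacker's backdoored model X.
import torch


def scaled_malicious_update(global_model, backdoored_model,
                            n_participants, global_lr):
    """Return the parameters the attacker submits in one round."""
    gamma = n_participants / global_lr
    submitted = {}
    global_params = global_model.state_dict()
    backdoor_params = backdoored_model.state_dict()
    for name, g_param in global_params.items():
        x_param = backdoor_params[name]
        # Scale the attacker's deviation from the current global model.
        submitted[name] = gamma * (x_param - g_param) + g_param
    return submitted


def constrain_and_scale_loss(class_loss, anomaly_loss, alpha=0.5):
    """Attacker's objective in the constrain-and-scale technique: a weighted
    sum of the usual task loss (covering both main and backdoor accuracy)
    and an anomaly-evasion loss that penalizes updates a defense would flag.
    The weight alpha here is an illustrative placeholder."""
    return alpha * class_loss + (1 - alpha) * anomaly_loss
```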
Cite
Text
Bagdasaryan et al. "How to Backdoor Federated Learning." Artificial Intelligence and Statistics, 2020.
Markdown
[Bagdasaryan et al. "How to Backdoor Federated Learning." Artificial Intelligence and Statistics, 2020.](https://mlanthology.org/aistats/2020/bagdasaryan2020aistats-backdoor/)
BibTeX
@inproceedings{bagdasaryan2020aistats-backdoor,
title = {{How to Backdoor Federated Learning}},
author = {Bagdasaryan, Eugene and Veit, Andreas and Hua, Yiqing and Estrin, Deborah and Shmatikov, Vitaly},
booktitle = {Artificial Intelligence and Statistics},
year = {2020},
pages = {2938-2948},
volume = {108},
url = {https://mlanthology.org/aistats/2020/bagdasaryan2020aistats-backdoor/}
}