Benchmarking the Effect of Poisoning Defenses on the Security and Bias of the Final Model
Abstract
Machine learning models are susceptible to a class of attacks known as adversarial poisoning, in which an adversary maliciously manipulates training data to hinder model performance or, more concerningly, to insert backdoors that can be exploited at inference time. Many methods have been proposed to defend against adversarial poisoning, either by identifying poisoned samples to facilitate their removal or by developing poison-agnostic training algorithms. Although effective, these approaches can have unintended consequences on other aspects of model performance, such as worsening performance on certain data sub-populations and thus inducing a classification bias. In this work, we evaluate several adversarial poisoning defenses. In addition to traditional security metrics, i.e., robustness to poisoned samples, we propose a new metric to measure the potential undesirable discrimination of sub-populations resulting from the use of these defenses. Our investigation highlights that many of the evaluated defenses trade decision fairness for higher adversarial poisoning robustness. Given these results, we recommend that our proposed metric be included in standard evaluations of machine learning defenses.
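The abstract does not specify how the proposed bias metric is computed. As a rough illustration only, the sketch below shows one common way to quantify sub-population disparity: the gap between the best- and worst-served groups' accuracies. The function name `subpopulation_accuracy_gap` and the comparison of a defended versus an undefended model are hypothetical and should not be taken as the paper's actual metric or protocol.

```python
import numpy as np

def subpopulation_accuracy_gap(y_true, y_pred, group_ids):
    """Largest gap in per-subpopulation accuracy.

    Hypothetical illustration: `group_ids` assigns each test sample to a
    sub-population, and the gap between the best- and worst-served groups
    serves as a simple proxy for classification bias.
    """
    y_true, y_pred, group_ids = map(np.asarray, (y_true, y_pred, group_ids))
    per_group_acc = [
        (y_pred[group_ids == g] == y_true[group_ids == g]).mean()
        for g in np.unique(group_ids)
    ]
    return max(per_group_acc) - min(per_group_acc)

# Hypothetical usage: compare the gap for models trained with and without a defense.
# gap_defended = subpopulation_accuracy_gap(y_test, defended_model.predict(X_test), groups)
# gap_baseline = subpopulation_accuracy_gap(y_test, baseline_model.predict(X_test), groups)
```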
Cite
Text
Baracaldo et al. "Benchmarking the Effect of Poisoning Defenses on the Security and Bias of the Final Model." NeurIPS 2022 Workshops: TSRML, 2022.
Markdown
[Baracaldo et al. "Benchmarking the Effect of Poisoning Defenses on the Security and Bias of the Final Model." NeurIPS 2022 Workshops: TSRML, 2022.](https://mlanthology.org/neuripsw/2022/baracaldo2022neuripsw-benchmarking/)
BibTeX
@inproceedings{baracaldo2022neuripsw-benchmarking,
  title     = {{Benchmarking the Effect of Poisoning Defenses on the Security and Bias of the Final Model}},
  author    = {Baracaldo, Nathalie and Eykholt, Kevin and Ahmed, Farhan and Zhou, Yi and Priya, Shriti and Lee, Taesung and Kadhe, Swanand and Tan, Yusong and Polavaram, Sridevi and Suggs, Sterling and Gao, Yuyang and Slater, David},
  booktitle = {NeurIPS 2022 Workshops: TSRML},
  year      = {2022},
  url       = {https://mlanthology.org/neuripsw/2022/baracaldo2022neuripsw-benchmarking/}
}