Robust Learning for Data Poisoning Attacks
Abstract
We investigate the robustness of stochastic approximation approaches against data poisoning attacks. We focus on two-layer neural networks with ReLU activation and show that under a specific notion of separability in the RKHS induced by the infinite-width network, training (finite-width) networks with stochastic gradient descent is robust against data poisoning attacks. Interestingly, we find that in addition to a lower bound on the width of the network, which is standard in the literature, we also require a distribution-dependent upper bound on the width for robust generalization. We provide extensive empirical evaluations that corroborate our theoretical results.
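To make the setting concrete, below is a minimal NumPy sketch of the kind of experiment the abstract describes: a width-m two-layer ReLU network trained with plain SGD on data in which a fraction of training labels has been corrupted, then evaluated on clean test data. The label-flipping attack, the logistic loss, the choice to train only the first layer, and all hyperparameters are illustrative assumptions, not the paper's exact construction.

```python
# Hedged sketch: SGD on a two-layer ReLU network under a label-flipping
# poisoning attack. All modeling choices below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def make_data(n, margin=1.0):
    """Linearly separable 2-D data with labels in {-1, +1}."""
    X = rng.normal(size=(n, 2))
    y = np.where(X[:, 0] + X[:, 1] > 0, 1.0, -1.0)
    X += margin * y[:, None] * np.array([1.0, 1.0]) / np.sqrt(2)  # push classes apart
    return X, y

def poison(y, frac):
    """Label-flipping attack: flip a random fraction of training labels."""
    y = y.copy()
    idx = rng.choice(len(y), size=int(frac * len(y)), replace=False)
    y[idx] = -y[idx]
    return y

def train_two_layer_relu(X, y, m=128, lr=0.1, epochs=50):
    """SGD on f(x) = a^T relu(W x); only W is trained (a common simplification
    in analyses of two-layer networks, assumed here for brevity)."""
    d = X.shape[1]
    W = rng.normal(size=(m, d)) / np.sqrt(d)
    a = rng.choice([-1.0, 1.0], size=m) / np.sqrt(m)
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            x, yi = X[i], y[i]
            z = W @ x
            f = a @ np.maximum(z, 0.0)
            # gradient of the logistic loss log(1 + exp(-y f)) w.r.t. f
            g = -yi / (1.0 + np.exp(np.clip(yi * f, -30.0, 30.0)))
            W -= lr * g * (a * (z > 0))[:, None] * x[None, :]
    return W, a

def accuracy(W, a, X, y):
    preds = np.sign(np.maximum(X @ W.T, 0.0) @ a)
    return float(np.mean(preds == y))

X_tr, y_tr = make_data(500)
X_te, y_te = make_data(1000)
W, a = train_two_layer_relu(X_tr, poison(y_tr, frac=0.1))
print("clean test accuracy with 10% flipped training labels:",
      accuracy(W, a, X_te, y_te))
```

Varying the width m and the poisoning fraction in this sketch is one way to probe, empirically, the width-dependent robustness behavior the abstract refers to.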
Cite
Text
Wang et al. "Robust Learning for Data Poisoning Attacks." International Conference on Machine Learning, 2021.

Markdown

[Wang et al. "Robust Learning for Data Poisoning Attacks." International Conference on Machine Learning, 2021.](https://mlanthology.org/icml/2021/wang2021icml-robust-a/)

BibTeX
@inproceedings{wang2021icml-robust-a,
title = {{Robust Learning for Data Poisoning Attacks}},
author = {Wang, Yunjuan and Mianjy, Poorya and Arora, Raman},
booktitle = {International Conference on Machine Learning},
year = {2021},
pages = {10859--10869},
volume = {139},
url = {https://mlanthology.org/icml/2021/wang2021icml-robust-a/}
}