Adversarial Regression with Multiple Learners
Abstract
Despite the considerable success enjoyed by machine learning techniques in practice, numerous studies have demonstrated that many approaches are vulnerable to attacks. An important class of such attacks involves adversaries changing features at test time to cause incorrect predictions. Previous investigations of this problem pit a single learner against an adversary. However, in many situations an adversary's decision is aimed at a collection of learners, rather than targeted at each one independently. We study the problem of adversarial linear regression with multiple learners. We approximate the resulting game by exhibiting an upper bound on learner loss functions, and show that the resulting game has a unique symmetric equilibrium. We present an algorithm for computing this equilibrium, and show through extensive experiments that equilibrium models are significantly more robust than conventional regularized linear regression.
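The equilibrium derivation itself is in the paper; as a rough intuition for why bounding a learner's worst-case loss yields weight shrinkage, here is a minimal, hypothetical sketch. It is not the paper's equilibrium algorithm: `lam` (a hand-picked ridge penalty) and `delta` (an assumed per-feature attacker budget) are illustrative choices. The key observation it encodes is that under an ℓ∞-bounded test-time perturbation of x, a linear model's worst-case absolute residual grows by exactly delta·‖θ‖₁, so shrinking the weights limits the attacker's leverage.

```python
import numpy as np

# Hypothetical illustration: ordinary least squares vs. a robustness-motivated
# ridge-style estimator, evaluated against adversarially perturbed test features.
# This is NOT the paper's equilibrium computation; lam and delta are made up.

rng = np.random.default_rng(0)
n, d = 200, 10
X = rng.normal(size=(n, d))
theta_true = rng.normal(size=d)
y = X @ theta_true + 0.1 * rng.normal(size=n)

def fit(X, y, lam=0.0):
    """Closed-form (regularized) least squares: (X'X + lam*I)^{-1} X'y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

theta_ols = fit(X, y)            # conventional learner
theta_rob = fit(X, y, lam=10.0)  # shrunken weights blunt feature manipulation

X_test = rng.normal(size=(50, d))
y_test = X_test @ theta_true
delta = 0.5  # attacker's per-feature (L-infinity) budget at test time

def adv_mse(theta, X_test, y_test, delta):
    """Worst-case MSE: max over ||dx||_inf <= delta of |(x+dx)'theta - y|
    equals |x'theta - y| + delta * ||theta||_1 for each test point."""
    resid = X_test @ theta - y_test
    worst = np.abs(resid) + delta * np.abs(theta).sum()
    return np.mean(worst ** 2)

print("OLS   adversarial MSE:", adv_mse(theta_ols, X_test, y_test, delta))
print("Ridge adversarial MSE:", adv_mse(theta_rob, X_test, y_test, delta))
```

The robust estimator trades a little clean accuracy for a smaller worst-case blow-up; the paper's contribution is to derive the right amount of shrinkage endogenously, as the unique symmetric equilibrium of the game among multiple learners, rather than via a hand-tuned penalty as above.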
Cite
Text
Tong et al. "Adversarial Regression with Multiple Learners." International Conference on Machine Learning, 2018.

Markdown

[Tong et al. "Adversarial Regression with Multiple Learners." International Conference on Machine Learning, 2018.](https://mlanthology.org/icml/2018/tong2018icml-adversarial/)

BibTeX
@inproceedings{tong2018icml-adversarial,
title = {{Adversarial Regression with Multiple Learners}},
author = {Tong, Liang and Yu, Sixie and Alfeld, Scott and Vorobeychik, Yevgeniy},
booktitle = {International Conference on Machine Learning},
year = {2018},
pages = {4946--4954},
volume = {80},
url = {https://mlanthology.org/icml/2018/tong2018icml-adversarial/}
}