Precise Accuracy / Robustness Tradeoffs in Regression: Case of General Norms
Abstract
In this paper, we investigate the impact of test-time adversarial attacks on linear regression models and determine the optimal level of robustness that any model can reach while maintaining a given level of standard predictive performance (accuracy). Through quantitative estimates, we uncover fundamental tradeoffs between adversarial robustness and accuracy in different regimes. We obtain a precise characterization that distinguishes between regimes where robustness is achievable without hurting standard accuracy and regimes where a tradeoff might be unavoidable. Our findings are empirically confirmed with simple experiments that represent a variety of settings. This work covers arbitrary feature covariance matrices and attack norms, extending previous work in this area.
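As a rough illustration of the tradeoff described above (not the paper's exact setup), the sketch below uses the fact that for a linear model f(x) = x·w, a test-time perturbation of norm at most eps shifts the residual by at most eps times the dual norm of w, and this bound is attained. It compares the standard and adversarial risks of ridge estimators under an l2-bounded attack; the isotropic Gaussian data, the ridge family, and all parameter values are illustrative assumptions.

```python
# Minimal sketch: standard vs adversarial risk of ridge regression under an
# l2-bounded test-time attack. For a linear model f(x) = x @ w, the worst-case
# perturbation ||delta||_2 <= eps inflates the absolute residual by eps * ||w||_2.
import numpy as np

rng = np.random.default_rng(0)
n, d, eps, sigma = 200, 50, 0.5, 0.5           # samples, dims, attack budget, noise level

w_star = rng.normal(size=d) / np.sqrt(d)        # ground-truth regressor (assumption)
X = rng.normal(size=(n, d))                     # isotropic Gaussian features (assumption)
y = X @ w_star + sigma * rng.normal(size=n)

X_test = rng.normal(size=(10_000, d))
y_test = X_test @ w_star + sigma * rng.normal(size=10_000)

for lam in [1e-3, 1e-1, 1e1]:
    # Ridge estimator: w_hat = (X^T X + lam * I)^{-1} X^T y
    w_hat = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)
    resid = X_test @ w_hat - y_test
    std_risk = np.mean(resid ** 2)              # clean (standard) test risk
    # Worst-case l2 attack of size eps adds eps * ||w_hat||_2 to each |residual|
    adv_risk = np.mean((np.abs(resid) + eps * np.linalg.norm(w_hat)) ** 2)
    print(f"lambda={lam:7.3f}  standard risk={std_risk:.3f}  adversarial risk={adv_risk:.3f}")
```

Larger regularization shrinks ||w_hat||_2 and hence the adversarial inflation term, but can increase the standard risk, which is the kind of accuracy/robustness tension the paper characterizes precisely.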
Cite
Text
Dohmatob and Scetbon. "Precise Accuracy / Robustness Tradeoffs in Regression: Case of General Norms." International Conference on Machine Learning, 2024.
Markdown
[Dohmatob and Scetbon. "Precise Accuracy / Robustness Tradeoffs in Regression: Case of General Norms." International Conference on Machine Learning, 2024.](https://mlanthology.org/icml/2024/dohmatob2024icml-precise/)
BibTeX
@inproceedings{dohmatob2024icml-precise,
title = {{Precise Accuracy / Robustness Tradeoffs in Regression: Case of General Norms}},
author = {Dohmatob, Elvis and Scetbon, Meyer},
booktitle = {International Conference on Machine Learning},
year = {2024},
pages = {11198--11226},
volume = {235},
url = {https://mlanthology.org/icml/2024/dohmatob2024icml-precise/}
}