Multitask Learning Strengthens Adversarial Robustness
Abstract
Although deep networks achieve strong accuracy on a range of computer vision benchmarks, they remain vulnerable to adversarial attacks, where imperceptible input perturbations fool the network. We present both theoretical and empirical analyses that connect the adversarial robustness of a model to the number of tasks that it is trained on. Experiments on two datasets show that attack difficulty increases as the number of target tasks increases. Moreover, our results suggest that when models are trained on multiple tasks at once, they become more robust to adversarial attacks on individual tasks. While adversarial defense remains an open challenge, our results suggest that deep networks are vulnerable partly because they are trained on too few tasks.
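To make concrete what attacking multiple tasks at once involves, below is a minimal PGD sketch (not the authors' implementation) that perturbs an input against the sum of per-task losses; the model's list-of-outputs interface, the `losses` list, and all hyperparameters are assumptions chosen for illustration.

```python
# Minimal sketch, assuming a model whose forward pass returns one
# prediction per task head. Not the paper's code; for illustration only.
import torch

def pgd_attack_multitask(model, x, targets, losses,
                         eps=8/255, alpha=2/255, steps=10):
    """L-inf PGD on the sum of per-task losses.

    model   -- network returning a list of outputs, one per task (assumption)
    targets -- list of ground-truth tensors, one per task
    losses  -- list of loss modules, one per task (e.g. CrossEntropyLoss)
    """
    # Random start inside the eps-ball, clipped to valid pixel range.
    x_adv = (x.clone().detach()
             + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        outputs = model(x_adv)
        # Joint objective: one perturbation must degrade every task head,
        # which the paper argues makes the attack harder as tasks are added.
        loss = sum(fn(out, y) for fn, out, y in zip(losses, outputs, targets))
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Ascend the joint loss, then project back onto the eps-ball.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)
        x_adv = x_adv.clamp(0, 1).detach()
    return x_adv
```

Attacking a single task is the special case where `losses` and `targets` have length one, so this sketch covers both the single-task and multitask settings compared in the abstract.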
Cite
Text
Mao et al. "Multitask Learning Strengthens Adversarial Robustness." Proceedings of the European Conference on Computer Vision (ECCV), 2020. doi:10.1007/978-3-030-58536-5_10
Markdown
[Mao et al. "Multitask Learning Strengthens Adversarial Robustness." Proceedings of the European Conference on Computer Vision (ECCV), 2020.](https://mlanthology.org/eccv/2020/mao2020eccv-multitask/) doi:10.1007/978-3-030-58536-5_10
BibTeX
@inproceedings{mao2020eccv-multitask,
title = {{Multitask Learning Strengthens Adversarial Robustness}},
author = {Mao, Chengzhi and Gupta, Amogh and Nitin, Vikram and Ray, Baishakhi and Song, Shuran and Yang, Junfeng and Vondrick, Carl},
booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
year = {2020},
doi = {10.1007/978-3-030-58536-5_10},
url = {https://mlanthology.org/eccv/2020/mao2020eccv-multitask/}
}