Using Multiple Self-Supervised Tasks Improves Model Robustness

Abstract

Deep networks achieve state-of-the-art performance on computer vision tasks, yet they fail under adversarial attacks that are imperceptible to humans. In this paper, we propose a novel defense that dynamically adapts the input using the intrinsic structure from multiple self-supervised tasks. By using many self-supervised tasks simultaneously, our defense avoids overfitting the adapted image to any one self-supervised task and restores more intrinsic structure in the image than a single-task approach. Our approach significantly improves both robustness and clean accuracy over the state-of-the-art single-task self-supervised defense. Our work is the first to connect multiple self-supervised tasks to robustness, and suggests that better robustness can be achieved by exploiting more of the intrinsic signal in visual data.
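The defense described in the abstract adapts each test input by descending the combined gradient of several self-supervised losses before classification. Below is a minimal PyTorch-style sketch of that idea; the function name `adapt_input`, the list of self-supervised loss callables `ssl_losses`, and the step-size and budget parameters are illustrative assumptions, not the paper's exact procedure.

```python
import torch

def adapt_input(x, ssl_losses, steps=5, lr=1 / 255, eps=8 / 255):
    """Sketch: adapt an input batch x (values in [0, 1]) by minimizing the
    sum of several self-supervised losses over a small, bounded perturbation.
    ssl_losses is a list of hypothetical callables mapping a batch to a scalar loss."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        # Combined objective: every self-supervised task constrains the update.
        loss = sum(loss_fn(x + delta) for loss_fn in ssl_losses)
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta -= lr * grad.sign()                 # signed gradient step
            delta.clamp_(-eps, eps)                   # keep the adaptation small
            delta.copy_((x + delta).clamp(0, 1) - x)  # keep the adapted image valid
    return (x + delta).detach()
```

In use, the adapted image would simply be passed to the classifier, e.g. `classifier(adapt_input(x, [rotation_loss, contrastive_loss]))`, where the two loss names are placeholders. Summing the losses is what lets multiple tasks jointly constrain the adaptation, which is the property the abstract credits for avoiding overfitting to any single task.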

Cite

Text

Lawhon et al. "Using Multiple Self-Supervised Tasks Improves Model Robustness." ICLR 2022 Workshops: PAIR2Struct, 2022.

Markdown

[Lawhon et al. "Using Multiple Self-Supervised Tasks Improves Model Robustness." ICLR 2022 Workshops: PAIR2Struct, 2022.](https://mlanthology.org/iclrw/2022/lawhon2022iclrw-using/)

BibTeX

@inproceedings{lawhon2022iclrw-using,
  title     = {{Using Multiple Self-Supervised Tasks Improves Model Robustness}},
  author    = {Lawhon, Matthew and Mao, Chengzhi and Yang, Junfeng},
  booktitle = {ICLR 2022 Workshops: PAIR2Struct},
  year      = {2022},
  url       = {https://mlanthology.org/iclrw/2022/lawhon2022iclrw-using/}
}