Adversarially Robust Multi-Task Representation Learning

Abstract

We study adversarially robust transfer learning, wherein, given labeled data on multiple (source) tasks, the goal is to train a model with small robust error on a previously unseen (target) task. In particular, we consider a multi-task representation learning (MTRL) setting, i.e., we assume that the source and target tasks admit a simple (linear) predictor on top of a shared representation (e.g., the final hidden layer of a deep neural network). In this general setting, we provide rates on the excess adversarial (transfer) risk for Lipschitz losses and smooth nonnegative losses. These rates show that learning a representation using adversarial training on diverse tasks helps protect against inference-time attacks in data-scarce environments. Additionally, we provide novel rates for the single-task setting.
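To make the setting concrete, the sketch below illustrates (under our own illustrative assumptions, not the authors' code) the MTRL pipeline the abstract describes: a shared representation with per-task linear heads is trained adversarially on several source tasks, where the adversarial risk is the standard one, the expected loss under a worst-case norm-bounded perturbation of the input at inference time. The representation is then frozen and only a fresh linear head is fit on a small target sample. All architecture choices, hyperparameters, and the synthetic data are assumptions for illustration.

```python
# Minimal sketch of adversarially robust multi-task representation
# learning and transfer. Illustrative only; every hyperparameter and
# the synthetic task generator below are assumptions, not the paper's.

import torch
import torch.nn as nn

def pgd_attack(model, x, y, loss_fn, eps=0.1, alpha=0.02, steps=5):
    """Projected gradient descent under an l_inf constraint: find a
    perturbation of x that (approximately) maximizes the loss -- the
    inference-time attack the robust risk is defined against."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = loss_fn(model(x + delta), y)
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()  # ascent step on the loss
            delta.clamp_(-eps, eps)             # project back into the l_inf ball
        delta.grad.zero_()
    return (x + delta).detach()

torch.manual_seed(0)
d, k, T = 20, 5, 4   # input dim, representation dim, number of source tasks

# Shared representation phi (here a small MLP) and one linear head per task.
rep = nn.Sequential(nn.Linear(d, 64), nn.ReLU(), nn.Linear(64, k))
heads = [nn.Linear(k, 1) for _ in range(T)]
loss_fn = nn.BCEWithLogitsLoss()
params = list(rep.parameters()) + [p for h in heads for p in h.parameters()]
opt = torch.optim.Adam(params, lr=1e-3)

# Synthetic source tasks that share a common ground-truth feature map B.
B = torch.randn(d, k)
tasks = []
for _ in range(T):
    w = torch.randn(k, 1)
    X = torch.randn(200, d)
    y = ((X @ B @ w) > 0).float()
    tasks.append((X, y))

# Adversarial multi-task training: each step minimizes the loss on
# PGD-perturbed inputs, jointly over the shared rep and the task heads.
for epoch in range(50):
    for t, (X, y) in enumerate(tasks):
        model = lambda z: heads[t](rep(z))
        X_adv = pgd_attack(model, X, y, loss_fn)
        opt.zero_grad()
        loss_fn(heads[t](rep(X_adv)), y).backward()
        opt.step()

# Transfer: freeze the learned representation and fit only a linear
# head on a small sample from a previously unseen target task.
w_new = torch.randn(k, 1)
X_tgt = torch.randn(30, d)                      # data-scarce target task
y_tgt = ((X_tgt @ B @ w_new) > 0).float()
head_tgt = nn.Linear(k, 1)
opt_tgt = torch.optim.Adam(head_tgt.parameters(), lr=1e-2)
for _ in range(200):
    opt_tgt.zero_grad()
    loss_fn(head_tgt(rep(X_tgt).detach()), y_tgt).backward()
    opt_tgt.step()
```

The freeze-and-fit transfer step mirrors the paper's structural assumption that the target task admits a simple linear predictor on top of the shared representation, which is why only the small target sample is needed for the final head.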

Cite

Text

Watkins et al. "Adversarially Robust Multi-Task Representation Learning." Neural Information Processing Systems, 2024. doi:10.52202/079017-4418

Markdown

[Watkins et al. "Adversarially Robust Multi-Task Representation Learning." Neural Information Processing Systems, 2024.](https://mlanthology.org/neurips/2024/watkins2024neurips-adversarially/) doi:10.52202/079017-4418

BibTeX

@inproceedings{watkins2024neurips-adversarially,
  title     = {{Adversarially Robust Multi-Task Representation Learning}},
  author    = {Watkins, Austin and Nguyen-Tang, Thanh and Ullah, Enayat and Arora, Raman},
  booktitle = {Neural Information Processing Systems},
  year      = {2024},
  doi       = {10.52202/079017-4418},
  url       = {https://mlanthology.org/neurips/2024/watkins2024neurips-adversarially/}
}