Large-Scale Adversarial Training for Vision-and-Language Representation Learning

Abstract

We present VILLA, the first known effort on large-scale adversarial training for vision-and-language (V+L) representation learning. VILLA consists of two training stages: (i) task-agnostic adversarial pre-training; followed by (ii) task-specific adversarial finetuning. Instead of adding adversarial perturbations on image pixels and textual tokens, we propose to perform adversarial training in the embedding space of each modality. To enable large-scale training, we adopt the "free" adversarial training strategy, and combine it with KL-divergence-based regularization to promote higher invariance in the embedding space. We apply VILLA to current best-performing V+L models, and achieve new state of the art on a wide range of tasks, including Visual Question Answering, Visual Commonsense Reasoning, Image-Text Retrieval, Referring Expression Comprehension, Visual Entailment, and NLVR2.
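To make the training scheme concrete, below is a minimal PyTorch sketch of one adversarial finetuning step on the text embeddings (the same scheme applies to the image-region features). The model interface, the hyperparameter names, and the single-step perturbation update are simplifying assumptions for exposition, not the paper's exact implementation; in particular, the paper's "free" strategy amortizes the inner ascent steps by reusing gradients from the model's backward pass, whereas this sketch uses a plain single-step update for clarity.

# Minimal sketch of embedding-space adversarial finetuning with
# KL-based consistency regularization. `model` is assumed to be a
# callable mapping (image embeddings, text embeddings) -> logits;
# hyperparameter names are illustrative.
import torch
import torch.nn.functional as F

def adversarial_step(model, img_emb, txt_emb, labels,
                     adv_lr=1e-3, adv_eps=1e-2, kl_weight=1.0):
    # Clean forward pass: standard cross-entropy on the task labels.
    clean_logits = model(img_emb, txt_emb)
    clean_loss = F.cross_entropy(clean_logits, labels)

    # Initialize a small random perturbation on the text embeddings.
    delta = torch.zeros_like(txt_emb).uniform_(-adv_eps, adv_eps)
    delta.requires_grad_(True)

    # One ascent step: move delta in the direction that increases the loss.
    adv_logits = model(img_emb, txt_emb + delta)
    adv_loss = F.cross_entropy(adv_logits, labels)
    grad, = torch.autograd.grad(adv_loss, delta)
    delta = (delta + adv_lr * grad.sign()).clamp(-adv_eps, adv_eps).detach()

    # Adversarial forward pass, plus a KL term that pushes the clean and
    # perturbed predictions to agree (invariance in the embedding space).
    adv_logits = model(img_emb, txt_emb + delta)
    adv_ce = F.cross_entropy(adv_logits, labels)
    kl = F.kl_div(F.log_softmax(adv_logits, dim=-1),
                  F.softmax(clean_logits.detach(), dim=-1),
                  reduction="batchmean")

    return clean_loss + adv_ce + kl_weight * kl

A caller would compute this combined loss per batch and invoke .backward() on it as usual; the choice of perturbing embeddings rather than raw pixels or tokens is what keeps the attack well-defined for discrete text inputs.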

Cite

Text

Gan et al. "Large-Scale Adversarial Training for Vision-and-Language Representation Learning." Neural Information Processing Systems, 2020.

Markdown

[Gan et al. "Large-Scale Adversarial Training for Vision-and-Language Representation Learning." Neural Information Processing Systems, 2020.](https://mlanthology.org/neurips/2020/gan2020neurips-largescale/)

BibTeX

@inproceedings{gan2020neurips-largescale,
  title     = {{Large-Scale Adversarial Training for Vision-and-Language Representation Learning}},
  author    = {Gan, Zhe and Chen, Yen-Chun and Li, Linjie and Zhu, Chen and Cheng, Yu and Liu, Jingjing},
  booktitle = {Neural Information Processing Systems},
  year      = {2020},
  url       = {https://mlanthology.org/neurips/2020/gan2020neurips-largescale/}
}