Multimodal Few-Shot Learning with Frozen Language Models

Abstract

When trained at sufficient scale, auto-regressive language models exhibit the notable ability to learn a new language task after being prompted with just a few examples. Here, we present a simple, yet effective, approach for transferring this few-shot learning ability to a multimodal setting (vision and language). Using aligned image and caption data, we train a vision encoder to represent each image as a sequence of continuous embeddings, such that a pre-trained, frozen language model presented with this prefix generates the appropriate caption. The resulting system is a multimodal few-shot learner, with the surprising ability to learn a variety of new tasks when conditioned on examples, represented as a sequence of any number of interleaved image and text embeddings. We demonstrate that it can rapidly learn words for new objects and novel visual categories, do visual question-answering with only a handful of examples, and make use of outside knowledge, by measuring a single model on a variety of established and new benchmarks.
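
The abstract describes a prefix-tuning-style setup: only the vision encoder is trained, while gradients flow through the frozen language model. Below is a minimal PyTorch sketch of that idea. All names and sizes here (VisionPrefixEncoder, n_prefix_tokens, the tiny convolutional backbone, and the assumption that the frozen LM maps input embeddings directly to logits) are illustrative assumptions for exposition, not the authors' implementation; the paper itself uses an NF-ResNet vision backbone producing a short visual prefix for a large pre-trained transformer LM.

import torch
import torch.nn as nn

class VisionPrefixEncoder(nn.Module):
    """Maps an image to a short sequence of continuous embeddings that
    live in the frozen language model's input-embedding space.
    (Illustrative stand-in; the paper uses an NF-ResNet backbone.)"""
    def __init__(self, d_model: int, n_prefix_tokens: int = 2):
        super().__init__()
        self.n_prefix_tokens = n_prefix_tokens
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=7, stride=4, padding=3),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
        )
        # Project pooled features to n_prefix_tokens vectors of size d_model.
        self.proj = nn.Linear(64, n_prefix_tokens * d_model)

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        feats = self.backbone(images)                 # (B, 64)
        prefix = self.proj(feats)                     # (B, n * d_model)
        return prefix.view(images.size(0), self.n_prefix_tokens, -1)

def caption_loss(frozen_lm, embed, vision_enc, images, caption_ids):
    """Captioning objective: prepend the image prefix to the caption's
    token embeddings and ask the frozen LM to predict the caption.
    frozen_lm and embed are the pre-trained LM's transformer stack and
    token-embedding table, both with requires_grad_(False); gradients
    still flow *through* them to update only vision_enc."""
    prefix = vision_enc(images)                       # (B, n, d), trainable
    tok_emb = embed(caption_ids)                      # (B, T, d), frozen
    inputs = torch.cat([prefix, tok_emb], dim=1)      # (B, n + T, d)
    logits = frozen_lm(inputs)                        # (B, n + T, V), assumed API
    n = prefix.size(1)
    # Predict caption token t from all positions up to n + t - 1.
    shift_logits = logits[:, n - 1 : -1, :]           # (B, T, V)
    return nn.functional.cross_entropy(
        shift_logits.reshape(-1, shift_logits.size(-1)),
        caption_ids.reshape(-1),
    )

Because the language model's weights never change, its few-shot prompting behavior is preserved; at test time, interleaving several (image prefix, text) pairs in one embedding sequence is what turns the captioning model into the multimodal few-shot learner the abstract describes.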

Cite

Text

Tsimpoukelli et al. "Multimodal Few-Shot Learning with Frozen Language Models." Neural Information Processing Systems, 2021.

Markdown

[Tsimpoukelli et al. "Multimodal Few-Shot Learning with Frozen Language Models." Neural Information Processing Systems, 2021.](https://mlanthology.org/neurips/2021/tsimpoukelli2021neurips-multimodal/)

BibTeX

@inproceedings{tsimpoukelli2021neurips-multimodal,
  title     = {{Multimodal Few-Shot Learning with Frozen Language Models}},
  author    = {Tsimpoukelli, Maria and Menick, Jacob L and Cabi, Serkan and Eslami, S. M. Ali and Vinyals, Oriol and Hill, Felix},
  booktitle = {Neural Information Processing Systems},
  year      = {2021},
  url       = {https://mlanthology.org/neurips/2021/tsimpoukelli2021neurips-multimodal/}
}