Prismer: A Vision-Language Model with Multi-Task Experts

Abstract

Recent vision-language models have shown impressive multi-modal generation capabilities. However, they typically require training huge models on massive datasets. As a more scalable alternative, we introduce Prismer, a data- and parameter-efficient vision-language model that leverages an ensemble of task-specific experts. Prismer requires training only a small number of components, with the majority of network weights inherited from multiple readily-available, pre-trained experts and kept frozen during training. By leveraging experts from a wide range of domains, we show that Prismer can efficiently pool this expert knowledge and adapt it to various vision-language reasoning tasks. In our experiments, we show that Prismer achieves fine-tuned and few-shot learning performance competitive with the current state of the art, whilst requiring up to two orders of magnitude less training data. Code is available at https://github.com/NVlabs/prismer.
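
To make the core idea concrete, below is a minimal PyTorch sketch of the frozen-experts pattern the abstract describes: pre-trained experts are inherited and kept frozen, and only small components on top of them are trained. This is an illustrative sketch under our own naming (FrozenExpertWithAdapter, adapter_dim, and the toy experts are all hypothetical), not the authors' implementation; see the linked repository for the real code.

# Minimal sketch of the frozen-experts idea, not the Prismer implementation.
import torch
import torch.nn as nn

class FrozenExpertWithAdapter(nn.Module):
    """Wraps a pre-trained expert, freezes it, and trains only a small adapter."""
    def __init__(self, expert: nn.Module, feat_dim: int, adapter_dim: int = 64):
        super().__init__()
        self.expert = expert
        for p in self.expert.parameters():   # inherit weights, keep them frozen
            p.requires_grad = False
        self.adapter = nn.Sequential(        # the only trainable component
            nn.Linear(feat_dim, adapter_dim),
            nn.GELU(),
            nn.Linear(adapter_dim, feat_dim),
        )

    def forward(self, x):
        with torch.no_grad():                # no gradients flow through the expert
            feats = self.expert(x)
        return feats + self.adapter(feats)   # residual adapter on expert features

# Toy usage: pool features from two stand-in "experts" (e.g. depth, segmentation).
depth_expert = nn.Linear(32, 32)
seg_expert = nn.Linear(32, 32)
experts = nn.ModuleList(
    FrozenExpertWithAdapter(e, feat_dim=32) for e in (depth_expert, seg_expert)
)
x = torch.randn(4, 32)
pooled = torch.stack([e(x) for e in experts]).mean(dim=0)  # fuse expert outputs
trainable = sum(p.numel() for p in experts.parameters() if p.requires_grad)
total = sum(p.numel() for p in experts.parameters())
print(f"trainable params: {trainable}/{total}")

Printing the parameter counts makes the data- and parameter-efficiency argument visible: only the adapter weights are updated, while the bulk of the network stays frozen.
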

Cite

Text

Liu et al. "Prismer: A Vision-Language Model with Multi-Task Experts." Transactions on Machine Learning Research, 2024.

Markdown

[Liu et al. "Prismer: A Vision-Language Model with Multi-Task Experts." Transactions on Machine Learning Research, 2024.](https://mlanthology.org/tmlr/2024/liu2024tmlr-prismer/)

BibTeX

@article{liu2024tmlr-prismer,
  title     = {{Prismer: A Vision-Language Model with Multi-Task Experts}},
  author    = {Liu, Shikun and Fan, Linxi and Johns, Edward and Yu, Zhiding and Xiao, Chaowei and Anandkumar, Anima},
  journal   = {Transactions on Machine Learning Research},
  year      = {2024},
  url       = {https://mlanthology.org/tmlr/2024/liu2024tmlr-prismer/}
}