Improving Foundation Models for Few-Shot Learning via Multitask Finetuning
Abstract
Foundation models have become essential tools for AI. In this paper, we study the problem of adapting foundation models, pre-trained using contrastive learning, to downstream tasks with limited labels. We explore the paradigm of finetuning a foundation model on a set of related tasks, each with a few labeled samples, before adapting it to a target task. We show both theoretically and empirically that, with a diverse set of related tasks, this finetuning leads to reduced error on the target task compared with directly adapting the same pre-trained model, e.g., at least a 6% target accuracy improvement on miniImageNet.
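To make the recipe described above concrete, here is a minimal sketch (not the authors' exact algorithm) of multitask finetuning followed by target-task adaptation: a contrastively pre-trained encoder is finetuned on a stream of related few-shot tasks, then a linear classifier is fit on the target task's few labels. All names and objectives (`encoder`, `sample_task`, the prototype-style task loss, the linear probe) are illustrative assumptions, not the paper's implementation.

```python
# Hedged sketch of "multitask finetuning, then adaptation" under assumed interfaces.
import torch
import torch.nn as nn
import torch.nn.functional as F

def multitask_finetune(encoder, sample_task, steps=1000, lr=1e-3):
    """Finetune `encoder` on a stream of related few-shot tasks.

    `sample_task()` is assumed to return (images, labels) for one task, with
    labels renumbered 0..C_t-1 within that task. The per-task objective here is
    a generic nearest-prototype loss; the paper may use a different objective.
    """
    optimizer = torch.optim.SGD(encoder.parameters(), lr=lr)
    for _ in range(steps):
        images, labels = sample_task()
        feats = F.normalize(encoder(images), dim=-1)  # embed and L2-normalize
        # Class prototypes within the task, then classify by similarity to them.
        protos = torch.stack([feats[labels == c].mean(0) for c in labels.unique()])
        logits = feats @ protos.t()
        loss = F.cross_entropy(logits, labels)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return encoder

def adapt_linear_probe(encoder, support_x, support_y, num_classes, steps=100):
    """Adapt to the target task: fit a linear head on frozen, finetuned features."""
    with torch.no_grad():
        feats = F.normalize(encoder(support_x), dim=-1)
    head = nn.Linear(feats.shape[-1], num_classes)
    opt = torch.optim.Adam(head.parameters(), lr=1e-2)
    for _ in range(steps):
        loss = F.cross_entropy(head(feats), support_y)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return head
```

The point of the sketch is only the ordering the abstract argues for: finetune on diverse related tasks first, then adapt, rather than adapting the pre-trained encoder directly.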
Cite
Text
Xu et al. "Improving Foundation Models for Few-Shot Learning via Multitask Finetuning." ICLR 2023 Workshops: ME-FoMo, 2023.

Markdown

[Xu et al. "Improving Foundation Models for Few-Shot Learning via Multitask Finetuning." ICLR 2023 Workshops: ME-FoMo, 2023.](https://mlanthology.org/iclrw/2023/xu2023iclrw-improving/)

BibTeX
@inproceedings{xu2023iclrw-improving,
title = {{Improving Foundation Models for Few-Shot Learning via Multitask Finetuning}},
author = {Xu, Zhuoyan and Shi, Zhenmei and Wei, Junyi and Li, Yin and Liang, Yingyu},
booktitle = {ICLR 2023 Workshops: ME-FoMo},
year = {2023},
url = {https://mlanthology.org/iclrw/2023/xu2023iclrw-improving/}
}