Unleashing the Power of Meta-Tuning for Few-Shot Generalization Through Sparse Interpolated Experts
Abstract
Recent successes suggest that parameter-efficient fine-tuning of foundation models is becoming the state-of-the-art method for transfer learning in vision, gradually replacing the rich literature of alternatives such as meta-learning. In trying to harness the best of both worlds, meta-tuning introduces a subsequent optimization stage of foundation models but has so far only shown limited success and, crucially, tends to underperform on out-of-distribution (OOD) tasks. In this paper, we introduce Sparse MetA-Tuning (SMAT), a method inspired by sparse mixture-of-experts approaches and trained to automatically isolate subsets of pre-trained parameters for meta-tuning on each task. SMAT successfully overcomes OOD sensitivity and delivers on the promise of enhancing the transfer abilities of vision foundation models beyond parameter-efficient fine-tuning. We establish new state-of-the-art results on a challenging combination of Meta-Dataset augmented with additional OOD tasks in both zero-shot and gradient-based adaptation settings. In addition, we provide a thorough analysis of the superiority of learned over hand-designed sparsity patterns for sparse expert methods and the pivotal importance of the sparsity level in balancing between in-distribution and out-of-distribution generalization. Our code and models are publicly available.
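To make the core idea concrete, below is a minimal PyTorch sketch of sparse interpolated experts as described in the abstract: each expert contributes a delta over the pre-trained weights, restricted by a learned sparsity mask, and a task-conditioned gate mixes the masked deltas into the frozen pre-trained parameters. All names here (interpolate_experts, the toy shapes, the gating values) are hypothetical illustrations, not the authors' released implementation.

import torch

def interpolate_experts(theta_pre, deltas, masks, gate_weights):
    """Combine frozen pre-trained weights with sparsely masked expert deltas.

    theta_pre:    frozen pre-trained parameter tensor
    deltas:       list of expert delta tensors, same shape as theta_pre
    masks:        list of {0,1} masks selecting which pre-trained parameters
                  each expert is allowed to modify (learned in SMAT)
    gate_weights: task-conditioned mixing weights, one scalar per expert
    """
    update = torch.zeros_like(theta_pre)
    for delta, mask, g in zip(deltas, masks, gate_weights):
        update = update + g * mask * delta  # sparse, per-expert update
    return theta_pre + update

# Toy usage: one 4x4 weight matrix, two experts, task-dependent gating.
torch.manual_seed(0)
theta_pre = torch.randn(4, 4)
deltas = [torch.randn(4, 4) * 0.01 for _ in range(2)]
masks = [(torch.rand(4, 4) < 0.25).float() for _ in range(2)]  # ~25% density
gate_weights = torch.softmax(torch.tensor([0.3, -0.1]), dim=0)
theta_task = interpolate_experts(theta_pre, deltas, masks, gate_weights)

The sparsity level of the masks is the knob the paper identifies as pivotal for trading off in-distribution against out-of-distribution generalization.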
Cite
Text
Chen et al. "Unleashing the Power of Meta-Tuning for Few-Shot Generalization Through Sparse Interpolated Experts." International Conference on Machine Learning, 2024.
Markdown
[Chen et al. "Unleashing the Power of Meta-Tuning for Few-Shot Generalization Through Sparse Interpolated Experts." International Conference on Machine Learning, 2024.](https://mlanthology.org/icml/2024/chen2024icml-unleashing/)
BibTeX
@inproceedings{chen2024icml-unleashing,
title = {{Unleashing the Power of Meta-Tuning for Few-Shot Generalization Through Sparse Interpolated Experts}},
author = {Chen, Shengzhuang and Tack, Jihoon and Yang, Yunqiao and Teh, Yee Whye and Schwarz, Jonathan Richard and Wei, Ying},
booktitle = {International Conference on Machine Learning},
year = {2024},
pages = {7280--7297},
volume = {235},
url = {https://mlanthology.org/icml/2024/chen2024icml-unleashing/}
}