FiT: Parameter Efficient Few-Shot Transfer Learning

Abstract

Model parameter efficiency is key for enabling few-shot learning, inexpensive model updates for personalization, and communication-efficient federated learning. In this work, we develop FiLM Transfer (FiT), which combines ideas from transfer learning (fixed pretrained backbones and fine-tuned FiLM adapter layers) and meta-learning (automatically configured Naive Bayes classifiers and episodic training) to yield parameter-efficient models with superior classification accuracy at low-shot. We experiment with FiT on a range of downstream datasets and show that it achieves better classification accuracy than the leading Big Transfer (BiT) algorithm at low-shot and achieves state-of-the-art accuracy on the challenging VTAB-1k benchmark, with fewer than 1% of the updateable parameters.
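The parameter efficiency comes from keeping the pretrained backbone frozen and fine-tuning only small FiLM (Feature-wise Linear Modulation) adapter layers, which scale and shift each channel of a feature map. A minimal numpy sketch of the FiLM operation (function and variable names here are illustrative, not from the paper's code):

```python
import numpy as np

def film(features, gamma, beta):
    """FiLM: modulate each channel of a feature map with a learnable
    per-channel scale (gamma) and shift (beta)."""
    # features: (batch, channels, height, width); gamma, beta: (channels,)
    return gamma[None, :, None, None] * features + beta[None, :, None, None]

# Illustrative sizes: in FiT-style training the backbone weights stay
# fixed, and only the small gamma/beta vectors (plus the classifier
# head) are updated, so the updateable parameter count is tiny
# relative to the backbone.
x = np.random.randn(2, 3, 4, 4)
gamma = np.array([2.0, 1.0, 1.0])  # double channel 0, leave others unchanged
beta = np.zeros(3)
out = film(x, gamma, beta)
```

With identity parameters (gamma = 1, beta = 0) the layer leaves the frozen backbone's features untouched, which is why FiLM adapters can be inserted without disturbing the pretrained representation before fine-tuning begins.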

Cite

Text

Shysheya et al. "FiT: Parameter Efficient Few-Shot Transfer Learning." NeurIPS 2022 Workshops: MetaLearn, 2022.

Markdown

[Shysheya et al. "FiT: Parameter Efficient Few-Shot Transfer Learning." NeurIPS 2022 Workshops: MetaLearn, 2022.](https://mlanthology.org/neuripsw/2022/shysheya2022neuripsw-fit/)

BibTeX

@inproceedings{shysheya2022neuripsw-fit,
  title     = {{FiT: Parameter Efficient Few-Shot Transfer Learning}},
  author    = {Shysheya, Aliaksandra and Bronskill, John F and Patacchiola, Massimiliano and Nowozin, Sebastian and Turner, Richard E},
  booktitle = {NeurIPS 2022 Workshops: MetaLearn},
  year      = {2022},
  url       = {https://mlanthology.org/neuripsw/2022/shysheya2022neuripsw-fit/}
}