A Preliminary Study on the Feature Representations of Transfer Learning and Gradient-Based Meta-Learning Techniques
Abstract
Meta-learning has received considerable attention as an approach to enable deep neural networks to learn from small amounts of data. Recent studies suggest that in specific cases, simply fine-tuning a pre-trained network may be more effective at learning new image classification tasks from limited data than more sophisticated meta-learning techniques such as MAML. This is surprising, as the learning behaviour of MAML mimics that of fine-tuning. We investigate this phenomenon and show that the pre-trained features are more diverse and discriminative than those learned by MAML and Reptile, which specialize in fast adaptation in low-data regimes on data distributions similar to the one used for training. Due to this specialization, MAML and Reptile may be hampered in their ability to generalize to out-of-distribution tasks, whereas fine-tuning can fall back on the diversity of the learned features.
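The abstract's observation that MAML's learning behaviour mimics fine-tuning rests on the fact that both adapt to a new task with a few gradient steps on its support set; MAML additionally differentiates the post-adaptation query loss back through those steps to update the initialization. The following is a minimal, hedged sketch (not the paper's code) that contrasts the two update rules on a hypothetical toy regression task; the model, data, step sizes, and step counts are illustrative assumptions only.

```python
# Illustrative sketch only: plain fine-tuning vs. one MAML-style meta-step
# on a toy linear regression task. All quantities here are hypothetical.
import torch

def forward(x, w, b):
    return x @ w + b

def loss_fn(pred, y):
    return ((pred - y) ** 2).mean()

# Toy "task": support and query sets drawn from the same linear function.
torch.manual_seed(0)
true_w = torch.randn(5, 1)
x_support, x_query = torch.randn(10, 5), torch.randn(10, 5)
y_support, y_query = x_support @ true_w, x_query @ true_w

# Shared initialization (plays the role of pre-trained / meta-learned weights).
w = torch.zeros(5, 1, requires_grad=True)
b = torch.zeros(1, requires_grad=True)
inner_lr, outer_lr = 0.1, 0.01

# --- Fine-tuning: ordinary gradient steps on the new task's support set. ---
w_ft, b_ft = w.detach().clone(), b.detach().clone()
for _ in range(5):
    w_ft.requires_grad_(True); b_ft.requires_grad_(True)
    loss = loss_fn(forward(x_support, w_ft, b_ft), y_support)
    gw, gb = torch.autograd.grad(loss, (w_ft, b_ft))
    w_ft = (w_ft - inner_lr * gw).detach()
    b_ft = (b_ft - inner_lr * gb).detach()

# --- MAML: the same inner update, but the query loss is differentiated
# --- through it to improve the initialization itself (one meta-step shown).
inner_loss = loss_fn(forward(x_support, w, b), y_support)
gw, gb = torch.autograd.grad(inner_loss, (w, b), create_graph=True)
w_adapted, b_adapted = w - inner_lr * gw, b - inner_lr * gb
meta_loss = loss_fn(forward(x_query, w_adapted, b_adapted), y_query)
meta_gw, meta_gb = torch.autograd.grad(meta_loss, (w, b))
with torch.no_grad():
    w -= outer_lr * meta_gw
    b -= outer_lr * meta_gb

print("fine-tuned support loss:", loss_fn(forward(x_support, w_ft, b_ft), y_support).item())
print("post-adaptation query loss used for the meta-update:", meta_loss.item())
```

The sketch makes the paper's framing concrete: because the meta-update optimizes the initialization specifically for performance after a few inner steps on in-distribution tasks, the resulting features can become specialized, whereas plain fine-tuning starts from features shaped by ordinary pre-training.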
Cite
Text
Huisman et al. "A Preliminary Study on the Feature Representations of Transfer Learning and Gradient-Based Meta-Learning Techniques." NeurIPS 2021 Workshops: MetaLearn, 2021.Markdown
[Huisman et al. "A Preliminary Study on the Feature Representations of Transfer Learning and Gradient-Based Meta-Learning Techniques." NeurIPS 2021 Workshops: MetaLearn, 2021.](https://mlanthology.org/neuripsw/2021/huisman2021neuripsw-preliminary/)BibTeX
@inproceedings{huisman2021neuripsw-preliminary,
title = {{A Preliminary Study on the Feature Representations of Transfer Learning and Gradient-Based Meta-Learning Techniques}},
author = {Huisman, Mike and van Rijn, Jan N. and Plaat, Aske},
booktitle = {NeurIPS 2021 Workshops: MetaLearn},
year = {2021},
url = {https://mlanthology.org/neuripsw/2021/huisman2021neuripsw-preliminary/}
}