Unraveling Meta-Learning: Understanding Feature Representations for Few-Shot Tasks
Abstract
Meta-learning algorithms produce feature extractors that achieve state-of-the-art performance on few-shot classification. While the literature is rich with meta-learning methods, little is known about why the resulting feature extractors perform so well. We develop a better understanding of the underlying mechanics of meta-learning and of the difference between models trained with meta-learning and models trained classically. In doing so, we introduce and verify several hypotheses for why meta-learned models perform better. Furthermore, we develop a regularizer that boosts the performance of standard training routines for few-shot classification. In many cases, our routine outperforms meta-learning while running an order of magnitude faster.
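To make the regularizer idea concrete, the sketch below shows one plausible form such a penalty could take: a feature-clustering term added to a standard cross-entropy training loop, encouraging the tight within-class feature clusters that the abstract associates with meta-learned models. This is a minimal illustration in PyTorch, not the paper's exact formulation; the helper feature_clustering_loss, the weight lambda_fc, and the model.backbone/model.classifier split are hypothetical names introduced here.

# Illustrative sketch only: one way a feature-clustering regularizer could be
# added to classical training. Not the authors' exact formulation.
import torch
import torch.nn.functional as F

def feature_clustering_loss(features, labels):
    """Ratio of within-class feature scatter to between-class scatter for one batch."""
    classes = labels.unique()
    class_means = torch.stack([features[labels == c].mean(dim=0) for c in classes])
    global_mean = features.mean(dim=0)
    # Average squared distance of each sample to its class mean.
    within = torch.stack([
        ((features[labels == c] - class_means[i]) ** 2).sum(dim=1).mean()
        for i, c in enumerate(classes)
    ]).mean()
    # Average squared distance of class means to the global mean.
    between = ((class_means - global_mean) ** 2).sum(dim=1).mean()
    return within / (between + 1e-8)  # epsilon guards against division by zero

def training_step(model, batch, optimizer, lambda_fc=0.1):
    images, labels = batch
    features = model.backbone(images)      # penultimate-layer features (hypothetical API)
    logits = model.classifier(features)
    loss = F.cross_entropy(logits, labels) + lambda_fc * feature_clustering_loss(features, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

Because the penalty only adds one extra term per minibatch, a training loop of this shape keeps the cost of standard training, which is consistent with the abstract's claim of running an order of magnitude faster than meta-learning.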
Cite
Text
Goldblum et al. "Unraveling Meta-Learning: Understanding Feature Representations for Few-Shot Tasks." International Conference on Machine Learning, 2020.

Markdown

[Goldblum et al. "Unraveling Meta-Learning: Understanding Feature Representations for Few-Shot Tasks." International Conference on Machine Learning, 2020.](https://mlanthology.org/icml/2020/goldblum2020icml-unraveling/)

BibTeX
@inproceedings{goldblum2020icml-unraveling,
title = {{Unraveling Meta-Learning: Understanding Feature Representations for Few-Shot Tasks}},
author = {Goldblum, Micah and Reich, Steven and Fowl, Liam and Ni, Renkun and Cherepanova, Valeriia and Goldstein, Tom},
booktitle = {International Conference on Machine Learning},
year = {2020},
pages = {3607--3616},
volume = {119},
url = {https://mlanthology.org/icml/2020/goldblum2020icml-unraveling/}
}