On Sensitivity of Meta-Learning to Support Data
Abstract
Meta-learning algorithms are widely used for few-shot learning; for example, they power image recognition systems that readily adapt to unseen classes after seeing only a few labeled examples. Despite their success, we show that modern meta-learning algorithms are extremely sensitive to the data used for adaptation, i.e. the support data. In particular, we demonstrate the existence of (unaltered, in-distribution, natural) images that, when used for adaptation, yield accuracy as low as 4% or as high as 95% on standard few-shot image classification benchmarks. We explain our empirical findings in terms of class margins, which in turn suggests that robust and safe meta-learning requires larger margins than supervised learning.
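The sensitivity phenomenon described above can be illustrated with a toy experiment (not from the paper; the data, shot count, and nearest-prototype classifier here are illustrative assumptions). Each few-shot "episode" draws a small support set per class, forms class prototypes, and classifies a fixed query set; the spread between the worst and best episode accuracies shows how strongly the choice of support data matters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: two overlapping Gaussian classes in 2-D,
# with a fixed query set for evaluation.
n_query = 500
q0 = rng.normal([-1.0, 0.0], 1.0, size=(n_query, 2))
q1 = rng.normal([+1.0, 0.0], 1.0, size=(n_query, 2))
queries = np.vstack([q0, q1])
labels = np.array([0] * n_query + [1] * n_query)

def episode_accuracy(k=5):
    """Draw a k-shot support set per class, then classify queries
    by distance to the nearest class prototype (support mean)."""
    s0 = rng.normal([-1.0, 0.0], 1.0, size=(k, 2))
    s1 = rng.normal([+1.0, 0.0], 1.0, size=(k, 2))
    protos = np.stack([s0.mean(axis=0), s1.mean(axis=0)])
    # Euclidean distance from every query to each prototype.
    d = np.linalg.norm(queries[:, None, :] - protos[None, :, :], axis=-1)
    return (d.argmin(axis=1) == labels).mean()

# Accuracy varies across episodes purely from the support draw.
accs = np.array([episode_accuracy() for _ in range(200)])
print(f"min={accs.min():.2f}  mean={accs.mean():.2f}  max={accs.max():.2f}")
```

Even in this simple setting the same query set is classified with noticeably different accuracy depending on which support examples were drawn; the paper's point is that for real benchmarks this gap can be far more extreme (4% vs. 95%) with natural, in-distribution support images.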
Cite
Text
Agarwal et al. "On Sensitivity of Meta-Learning to Support Data." Neural Information Processing Systems, 2021.

Markdown

[Agarwal et al. "On Sensitivity of Meta-Learning to Support Data." Neural Information Processing Systems, 2021.](https://mlanthology.org/neurips/2021/agarwal2021neurips-sensitivity/)

BibTeX
@inproceedings{agarwal2021neurips-sensitivity,
  title = {{On Sensitivity of Meta-Learning to Support Data}},
  author = {Agarwal, Mayank and Yurochkin, Mikhail and Sun, Yuekai},
  booktitle = {Neural Information Processing Systems},
  year = {2021},
  url = {https://mlanthology.org/neurips/2021/agarwal2021neurips-sensitivity/}
}