A Meta Understanding of Meta-Learning

Abstract

Recent years have witnessed an abundance of new publications and approaches to meta-learning. This community-wide enthusiasm has sparked great insights but has also created a plethora of seemingly different frameworks, which can be hard to compare and evaluate. In this paper, we aim to provide a single principled, unifying framework that draws a close connection between meta-learning and traditional supervised learning. By treating pairs of task-specific data sets and trained models as (feature, label) samples, we can reduce many meta-learning algorithms to instances of supervised learning. This view not only unifies meta-learning into an intuitive and practical framework but also allows us to transfer insights from supervised learning directly to improve meta-learning. For example, we obtain a better understanding of generalization properties, and we can readily transfer well-understood techniques, such as model ensembling, pre-training, joint training, data augmentation, and even nearest-neighbor-based methods. We provide an intuitive analogy of these methods in the context of meta-learning and show that they give rise to significant improvements in model performance.
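To make the "(data set, trained model) as (feature, label)" analogy concrete, here is a minimal sketch: each task's support set plays the role of an input, the base-learner weights fitted on it play the role of a label, and a nearest-neighbor meta-learner over data-set embeddings acts as the supervised model that maps a new data set to model parameters. The synthetic linear-regression tasks, the hand-crafted `embed_dataset` feature, and the 1-NN retrieval below are illustrative assumptions for this sketch, not the paper's exact construction.

```python
import numpy as np

rng = np.random.default_rng(0)


def make_task(dim=5, n_support=20, n_query=20, noise=0.1):
    """Sample one linear-regression task: support and query sets share a weight vector."""
    w = rng.normal(size=dim)

    def sample(n):
        X = rng.normal(size=(n, dim))
        y = X @ w + noise * rng.normal(size=n)
        return X, y

    return sample(n_support), sample(n_query)


def fit_base_model(X, y, lam=1e-2):
    """Base learner: closed-form ridge regression; its weights act as the 'label' for the task."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)


def embed_dataset(X, y):
    """A simple hand-crafted 'feature' for a whole data set: per-dimension correlation with the labels."""
    return (X * y[:, None]).mean(axis=0)


# Meta-training: build (data-set embedding, trained model) pairs, one per task.
meta_X, meta_Theta = [], []
for _ in range(200):
    (Xs, ys), _ = make_task()
    meta_X.append(embed_dataset(Xs, ys))
    meta_Theta.append(fit_base_model(Xs, ys))
meta_X, meta_Theta = np.stack(meta_X), np.stack(meta_Theta)

# Meta-testing: embed the new task's support set, retrieve the nearest stored model,
# and evaluate it on the query set -- no training on the new task itself.
(Xs, ys), (Xq, yq) = make_task()
z = embed_dataset(Xs, ys)
nearest = np.argmin(np.linalg.norm(meta_X - z, axis=1))
theta = meta_Theta[nearest]
print("query MSE:", np.mean((Xq @ theta - yq) ** 2))
```

Under this view, swapping the 1-NN retrieval for any other regressor from data-set embeddings to model weights yields a different meta-learning algorithm, which is the sense in which supervised-learning techniques such as ensembling or data augmentation transfer directly.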

Cite

Text

Chao et al. "A Meta Understanding of Meta-Learning." ICML 2019 Workshops: AMTL, 2019.

Markdown

[Chao et al. "A Meta Understanding of Meta-Learning." ICML 2019 Workshops: AMTL, 2019.](https://mlanthology.org/icmlw/2019/chao2019icmlw-meta/)

BibTeX

@inproceedings{chao2019icmlw-meta,
  title     = {{A Meta Understanding of Meta-Learning}},
  author    = {Chao, Wei-Lun and Ye, Han-Jia and Zhan, De-Chuan and Campbell, Mark and Weinberger, Kilian Q.},
  booktitle = {ICML 2019 Workshops: AMTL},
  year      = {2019},
  url       = {https://mlanthology.org/icmlw/2019/chao2019icmlw-meta/}
}