Meta-Learning from Sparse Recovery

Abstract

Meta-learning aims to train a model on a variety of tasks so that, given sample data from a task, even an unforeseen one, it can adapt quickly and perform well. We apply techniques from compressed sensing to shed light on the effect of inner-loop regularization in meta-learning, with an algorithm that minimizes cross-task interference without compromising weight sharing. In our algorithm, which is representative of numerous similar variations, the model is explicitly trained such that, upon adding a pertinent sparse output layer, it can perform well on a new task with very few updates, where cross-task interference is minimized by sparse recovery of the output layer. We demonstrate that this approach produces good results on few-shot regression, classification, and reinforcement learning, with several benefits in terms of training efficiency, stability, and generalization.
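As a rough illustration of the inner-loop idea described in the abstract (not the authors' implementation), the task-specific adaptation can be viewed as a sparse recovery problem: given a meta-learned feature map, fit a sparse linear output layer to the few-shot support set via L1-regularized least squares. The sketch below solves that LASSO problem with plain ISTA; the names sparse_output_layer and soft_threshold, the regularization weight, and the synthetic task are hypothetical choices for illustration only.

import numpy as np

def soft_threshold(v, tau):
    # Proximal operator of the L1 norm (element-wise soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def sparse_output_layer(features, targets, lam=0.05, n_iters=500):
    # Recover a sparse output layer w for a new task by minimizing
    #   0.5 * ||features @ w - targets||^2 + lam * ||w||_1
    # with ISTA.  features: (n_shots, d) meta-learned representation of the
    # support set; targets: (n_shots,) support-set labels / regression targets.
    n, d = features.shape
    w = np.zeros(d)
    # Step size from the Lipschitz constant of the smooth part
    # (squared spectral norm of the feature matrix).
    step = 1.0 / (np.linalg.norm(features, 2) ** 2 + 1e-12)
    for _ in range(n_iters):
        grad = features.T @ (features @ w - targets)
        w = soft_threshold(w - step * grad, step * lam)
    return w

# Hypothetical few-shot regression task: few support examples, and the task
# truly depends on only a small subset of the feature dimensions.
rng = np.random.default_rng(0)
Phi = rng.normal(size=(10, 64))            # stand-in for meta-learned features phi(x)
w_true = np.zeros(64)
w_true[[3, 17]] = [1.5, -2.0]
y = Phi @ w_true + 0.01 * rng.normal(size=10)

w_hat = sparse_output_layer(Phi, y)
print("support of recovered output layer:", np.flatnonzero(np.abs(w_hat) > 1e-3))

Because the L1 penalty drives most output-layer weights exactly to zero, each task touches only a small subset of the shared representation, which is one way to read the paper's claim that sparse recovery of the output layer limits cross-task interference while the backbone weights remain shared.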

Cite

Text

Lou et al. "Meta-Learning from Sparse Recovery." NeurIPS 2021 Workshops: MetaLearn, 2021.

Markdown

[Lou et al. "Meta-Learning from Sparse Recovery." NeurIPS 2021 Workshops: MetaLearn, 2021.](https://mlanthology.org/neuripsw/2021/lou2021neuripsw-metalearning/)

BibTeX

@inproceedings{lou2021neuripsw-metalearning,
  title     = {{Meta-Learning from Sparse Recovery}},
  author    = {Lou, Beicheng and Zhao, Nathan and Wang, Jiahui},
  booktitle = {NeurIPS 2021 Workshops: MetaLearn},
  year      = {2021},
  url       = {https://mlanthology.org/neuripsw/2021/lou2021neuripsw-metalearning/}
}