Meta-Learning for Mixed Linear Regression

Abstract

In modern supervised learning, there are a large number of tasks, but many of them are associated with only a small amount of labelled data. These include data from medical image processing and robotic interaction. Even though no individual task can be meaningfully learned in isolation, one seeks to meta-learn across tasks by exploiting similarities from past experience. We study a fundamental question of interest: When can abundant tasks with small data compensate for the lack of tasks with big data? We focus on a canonical scenario where each task is drawn from a mixture of $k$ linear regressions, and identify sufficient conditions for such a graceful exchange to hold; there is little loss in sample complexity even when we only have access to small data tasks. To this end, we introduce a novel spectral approach and show that we can efficiently utilize small data tasks with the help of $\tilde\Omega(k^{3/2})$ medium data tasks each with $\tilde\Omega(k^{1/2})$ examples.
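As an illustrative sketch (not the authors' implementation; the component vectors, noise level, mixture weights, and sample sizes below are assumptions for the demo), the task model and the basic spectral idea — averaging a cross-moment of per-task estimates and taking top eigenvectors of the result — can be written as:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_tasks(n_tasks, n_examples, betas, noise=0.1):
    """Draw tasks from a mixture of linear regressions.

    betas: (k, d) array of component regression vectors.
    Each task picks one component uniformly, then yields n_examples
    noisy linear observations (X, y).
    """
    k, d = betas.shape
    labels = rng.integers(k, size=n_tasks)
    tasks = []
    for z in labels:
        X = rng.standard_normal((n_examples, d))
        y = X @ betas[z] + noise * rng.standard_normal(n_examples)
        tasks.append((X, y))
    return tasks

def subspace_estimate(tasks, k):
    """Estimate the subspace spanned by the component vectors.

    Splitting each task's examples in half makes b1 and b2 independent,
    so E[b1 b2^T] = beta_z beta_z^T with no noise bias; averaging over
    tasks approximates sum_i w_i beta_i beta_i^T, whose top-k
    eigenvectors span the beta's. This is one generic spectral
    estimator in the spirit of the paper, not its exact algorithm.
    """
    d = tasks[0][0].shape[1]
    M = np.zeros((d, d))
    for X, y in tasks:
        m = len(y) // 2
        b1 = X[:m].T @ y[:m] / m               # E[x y] = beta for Gaussian x
        b2 = X[m:].T @ y[m:] / (len(y) - m)
        M += np.outer(b1, b2)
    M = (M + M.T) / (2 * len(tasks))           # symmetrize the average
    vals, vecs = np.linalg.eigh(M)
    order = np.argsort(-np.abs(vals))[:k]
    return vecs[:, order]                      # (d, k) orthonormal basis

# "Small data" regime: many tasks, each with only a handful of examples.
k, d = 2, 8
betas = rng.standard_normal((k, d))
tasks = make_tasks(n_tasks=20000, n_examples=6, betas=betas)
U = subspace_estimate(tasks, k)
```

Once the subspace is recovered, each task's few examples only need to determine a $k$-dimensional (rather than $d$-dimensional) parameter, which is the sense in which many small-data tasks substitute for big-data ones.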

Cite

Text

Kong et al. "Meta-Learning for Mixed Linear Regression." International Conference on Machine Learning, 2020.

Markdown

[Kong et al. "Meta-Learning for Mixed Linear Regression." International Conference on Machine Learning, 2020.](https://mlanthology.org/icml/2020/kong2020icml-metalearning/)

BibTeX

@inproceedings{kong2020icml-metalearning,
  title     = {{Meta-Learning for Mixed Linear Regression}},
  author    = {Kong, Weihao and Somani, Raghav and Song, Zhao and Kakade, Sham and Oh, Sewoong},
  booktitle = {International Conference on Machine Learning},
  year      = {2020},
  pages     = {5394--5404},
  volume    = {119},
  url       = {https://mlanthology.org/icml/2020/kong2020icml-metalearning/}
}