Provable Meta-Learning of Linear Representations
Abstract
Meta-learning, or learning-to-learn, seeks to design algorithms that can utilize previous experience to rapidly learn new skills or adapt to new environments. Representation learning—a key tool for performing meta-learning—learns a data representation that can transfer knowledge across multiple tasks, which is essential in regimes where data is scarce. Despite a recent surge of interest in the practice of meta-learning, the theoretical underpinnings of meta-learning algorithms are lacking, especially in the context of learning transferable representations. In this paper, we focus on the problem of multi-task linear regression—in which multiple linear regression models share a common, low-dimensional linear representation. Here, we provide provably fast, sample-efficient algorithms to address the dual challenges of (1) learning a common set of features from multiple, related tasks, and (2) transferring this knowledge to new, unseen tasks. Both are central to the general problem of meta-learning. Finally, we complement these results by providing information-theoretic lower bounds on the sample complexity of learning these linear features.
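The model the abstract describes can be sketched in a few lines: each task shares a low-dimensional linear representation (a tall matrix B), and a new task needs only a few samples once that representation is known. The sketch below is illustrative only — the subspace estimator (top eigenvectors of an averaged second-moment matrix under Gaussian covariates) is a stand-in, not necessarily the paper's exact procedure, and all dimensions and sample sizes are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, T, n = 20, 2, 40, 200  # ambient dim, shared rank, #tasks, samples/task

# Shared low-dimensional representation: a d x r orthonormal matrix B.
B, _ = np.linalg.qr(rng.standard_normal((d, r)))

# Task t has its own r-dim weight vector w_t; responses y = x^T B w_t + noise.
X = rng.standard_normal((T, n, d))
W = rng.standard_normal((T, r))
Y = np.einsum('tnd,dr,tr->tn', X, B, W) + 0.1 * rng.standard_normal((T, n))

# Illustrative subspace estimate (an assumption, not the paper's exact method):
# average y^2 * x x^T over all tasks/samples, take its top-r eigenvectors.
M = np.einsum('tn,tnd,tne->de', Y**2, X, X) / (T * n)
_, eigvecs = np.linalg.eigh(M)       # eigenvalues in ascending order
B_hat = eigvecs[:, -r:]              # top-r eigenvectors span approx col(B)

# Subspace error: sine of the largest principal angle between the spans.
s = np.linalg.svd(B.T @ B_hat, compute_uv=False)
sin_theta = np.sqrt(max(0.0, 1.0 - s.min() ** 2))

# Transfer to an unseen task: with B_hat fixed, only r parameters remain,
# so far fewer than d samples suffice (here n_new = 10 < d = 20).
n_new = 10
x_new = rng.standard_normal((n_new, d))
w_new = rng.standard_normal(r)
y_new = x_new @ B @ w_new + 0.1 * rng.standard_normal(n_new)
w_fit, *_ = np.linalg.lstsq(x_new @ B_hat, y_new, rcond=None)
beta_err = np.linalg.norm(B_hat @ w_fit - B @ w_new)
```

With T*n = 8000 pooled samples the estimated subspace is close to the true one (small `sin_theta`), and the new task is then fit from only 10 samples — the qualitative behavior the abstract's sample-complexity results make precise.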
Cite
Text
Tripuraneni et al. "Provable Meta-Learning of Linear Representations." International Conference on Machine Learning, 2021.
Markdown
[Tripuraneni et al. "Provable Meta-Learning of Linear Representations." International Conference on Machine Learning, 2021.](https://mlanthology.org/icml/2021/tripuraneni2021icml-provable/)
BibTeX
@inproceedings{tripuraneni2021icml-provable,
title = {{Provable Meta-Learning of Linear Representations}},
author = {Tripuraneni, Nilesh and Jin, Chi and Jordan, Michael},
booktitle = {International Conference on Machine Learning},
year = {2021},
pages = {10434--10443},
volume = {139},
url = {https://mlanthology.org/icml/2021/tripuraneni2021icml-provable/}
}