Conditional Meta-Learning of Linear Representations
Abstract
Standard meta-learning for representation learning aims to find a common representation to be shared across multiple tasks. The effectiveness of these methods is often limited when the nuances of the tasks' distribution cannot be captured by a single representation. In this work we overcome this issue by inferring a conditioning function that maps the tasks' side information (such as the tasks' training dataset itself) into a representation tailored to the task at hand. We study environments in which our conditional strategy outperforms standard meta-learning, such as those in which tasks can be organized into separate clusters according to the representation they share. We then propose a meta-algorithm capable of leveraging this advantage in practice. In the unconditional setting, our method yields a new estimator enjoying faster learning rates and requiring fewer hyper-parameters to tune than current state-of-the-art methods. Our results are supported by preliminary experiments.
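The abstract's core idea, mapping a task's side information (e.g., its own training set) to a task-specific linear representation, can be illustrated with a toy sketch. The snippet below is a minimal illustration and not the paper's estimator: it assumes tasks fall into two clusters, each sharing its own low-dimensional linear representation, and uses a naive conditioning function that selects the representation best fitting the task's dataset. The names `sample_task` and `condition`, and all dimensions, are hypothetical choices made for the example.

```python
import numpy as np

# Toy setup (assumed for illustration, not the paper's construction):
# tasks fall into two clusters, each with its own d x k representation.
rng = np.random.default_rng(0)
d, k, n = 20, 3, 50  # input dim, representation dim, samples per task

cluster_reps = [np.linalg.qr(rng.normal(size=(d, k)))[0] for _ in range(2)]

def sample_task(reps):
    """Draw a task: pick a cluster, then a regressor in that cluster's subspace."""
    B = reps[rng.integers(len(reps))]      # cluster's shared representation
    w = B @ rng.normal(size=k)             # task weights lie in span(B)
    X = rng.normal(size=(n, d))
    y = X @ w + 0.1 * rng.normal(size=n)
    return X, y

def condition(X, y, reps):
    """Naive conditioning function: map the task's own dataset (its side
    information) to the candidate representation with lowest training error."""
    errs = []
    for B in reps:
        coeffs = np.linalg.lstsq(X @ B, y, rcond=None)[0]
        errs.append(np.mean((X @ (B @ coeffs) - y) ** 2))
    return reps[int(np.argmin(errs))]

X, y = sample_task(cluster_reps)
B_task = condition(X, y, cluster_reps)     # representation tailored to this task
w_task = B_task @ np.linalg.lstsq(X @ B_task, y, rcond=None)[0]
print("within-task MSE:", np.mean((X @ w_task - y) ** 2))
```

In this clustered environment, an unconditional method must commit to one representation for all tasks, whereas the conditioning function can recover the cluster-specific one, which is exactly the kind of advantage the paper studies.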
Cite
Text
Denevi et al. "Conditional Meta-Learning of Linear Representations." Neural Information Processing Systems, 2022.

Markdown
[Denevi et al. "Conditional Meta-Learning of Linear Representations." Neural Information Processing Systems, 2022.](https://mlanthology.org/neurips/2022/denevi2022neurips-conditional/)

BibTeX
@inproceedings{denevi2022neurips-conditional,
  title = {{Conditional Meta-Learning of Linear Representations}},
  author = {Denevi, Giulia and Pontil, Massimiliano and Ciliberto, Carlo},
  booktitle = {Neural Information Processing Systems},
  year = {2022},
  url = {https://mlanthology.org/neurips/2022/denevi2022neurips-conditional/}
}