Modular Meta-Learning with Shrinkage
Abstract
Many real-world problems, including multi-speaker text-to-speech synthesis, can greatly benefit from the ability to meta-learn large models with only a few task-specific components. Updating only these task-specific modules then allows the model to be adapted to low-data tasks for as many steps as necessary without risking overfitting. Unfortunately, existing meta-learning methods either do not scale to long adaptation horizons or else rely on handcrafted task-specific architectures. Here, we propose a meta-learning approach that obviates the need for this often sub-optimal hand-selection. In particular, we develop general techniques based on Bayesian shrinkage to automatically discover and learn both task-specific and general reusable modules. Empirically, we demonstrate that our method discovers a small set of meaningful task-specific modules and outperforms existing meta-learning approaches in domains like few-shot text-to-speech that have little task data and long adaptation horizons. We also show that existing meta-learning methods, including MAML, iMAML, and Reptile, emerge as special cases of our method.
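The shrinkage mechanism described in the abstract can be made concrete with a small sketch. The following hierarchical Gaussian prior over module parameters is illustrative only; the symbols (theta, phi, sigma) are our own notation and not taken verbatim from the paper. Each module m has shared meta-parameters \phi_m and a learned shrinkage scale \sigma_m^2, and each task t receives its own copy \theta_{t,m}:

% Hierarchical shrinkage prior (illustrative notation, a sketch rather than
% the paper's exact formulation). Each task-t copy of module m is drawn
% around the shared meta-parameters \phi_m:
\theta_{t,m} \sim \mathcal{N}\!\left(\phi_m, \sigma_m^2 I\right),
\qquad
\log p\left(\theta_{t,m} \mid \phi_m, \sigma_m^2\right)
  = -\frac{1}{2\sigma_m^2}\,\lVert \theta_{t,m} - \phi_m \rVert^2 + \mathrm{const}.

As \sigma_m^2 \to 0 the prior shrinks module m onto the shared value \phi_m, marking it as general and reusable across tasks, whereas a large learned \sigma_m^2 leaves module m free to adapt to each task for many steps. Under this reading, penalizing adaptation with a single fixed \sigma^2 shared by all modules yields an iMAML-style proximal objective, which is consistent with the abstract's claim that existing meta-learning methods emerge as special cases.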
Cite
Text
Chen et al. "Modular Meta-Learning with Shrinkage." Neural Information Processing Systems, 2020.
Markdown
[Chen et al. "Modular Meta-Learning with Shrinkage." Neural Information Processing Systems, 2020.](https://mlanthology.org/neurips/2020/chen2020neurips-modular/)
BibTeX
@inproceedings{chen2020neurips-modular,
  title     = {{Modular Meta-Learning with Shrinkage}},
  author    = {Chen, Yutian and Friesen, Abram L. and Behbahani, Feryal and Doucet, Arnaud and Budden, David and Hoffman, Matthew and de Freitas, Nando},
  booktitle = {Neural Information Processing Systems},
  year      = {2020},
  url       = {https://mlanthology.org/neurips/2020/chen2020neurips-modular/}
}