Meta-Learning with Neural Tangent Kernels
Abstract
Model-Agnostic Meta-Learning (MAML) has emerged as a standard framework for meta-learning, where a meta-model is learned with the ability to adapt quickly to new tasks. However, as a double-looped optimization problem, MAML needs to differentiate through the whole inner-loop optimization path for every outer-loop training step, which may lead to both computational inefficiency and sub-optimal solutions. In this paper, we generalize MAML to allow meta-learning to be defined in function spaces, and propose the first meta-learning paradigm in the Reproducing Kernel Hilbert Space (RKHS) induced by the meta-model's Neural Tangent Kernel (NTK). Within this paradigm, we introduce two meta-learning algorithms in the RKHS, which no longer need the sub-optimal iterative inner-loop adaptation of the MAML framework. We achieve this goal by 1) replacing the adaptation with a fast-adaptive regularizer in the RKHS; and 2) solving the adaptation analytically based on NTK theory. Extensive experimental studies demonstrate the advantages of our paradigm in both efficiency and quality of solutions compared to related meta-learning algorithms. Our experiments also show that the proposed methods are more robust to adversarial attacks and out-of-distribution adaptation than popular baselines.
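As a rough illustration of the second idea (solving the adaptation analytically via the NTK), the sketch below computes the empirical NTK of a small network and performs closed-form kernel ridge regression on a task's support set. This is a minimal, hypothetical example, not the authors' implementation: the model `SmallNet`, the helpers `param_grad`, `ntk_matrix`, `ntk_adapt`, and the regularization strength `lam` are all illustrative assumptions.

```python
import torch
import torch.nn as nn

class SmallNet(nn.Module):
    """Tiny meta-model, used only to make the empirical NTK computable by hand."""
    def __init__(self, in_dim=1, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, hidden), nn.Tanh(),
                                 nn.Linear(hidden, 1))

    def forward(self, x):
        return self.net(x)

def param_grad(model, x):
    """Gradient of the scalar output at one input w.r.t. all parameters, flattened."""
    out = model(x.unsqueeze(0)).squeeze()
    grads = torch.autograd.grad(out, list(model.parameters()))
    return torch.cat([g.reshape(-1) for g in grads])

def ntk_matrix(model, X1, X2):
    """Empirical NTK: K[i, j] = <df(x1_i)/dtheta, df(x2_j)/dtheta>."""
    J1 = torch.stack([param_grad(model, x) for x in X1])
    J2 = torch.stack([param_grad(model, x) for x in X2])
    return J1 @ J2.T

def ntk_adapt(model, X_support, y_support, X_query, lam=1e-3):
    """Closed-form adaptation: kernel ridge regression on the support residuals."""
    with torch.no_grad():
        f_s = model(X_support)   # meta-model predictions on the support set
        f_q = model(X_query)     # meta-model predictions on the query set
    K_ss = ntk_matrix(model, X_support, X_support)
    K_qs = ntk_matrix(model, X_query, X_support)
    residual = y_support - f_s
    alpha = torch.linalg.solve(K_ss + lam * torch.eye(len(X_support)), residual)
    return f_q + K_qs @ alpha    # adapted predictions on the query set

# Toy usage: adapt the (untrained) meta-model to a sine-regression task.
model = SmallNet()
X_s = torch.linspace(-3, 3, 10).unsqueeze(-1)
y_s = torch.sin(X_s)
X_q = torch.linspace(-3, 3, 50).unsqueeze(-1)
preds = ntk_adapt(model, X_s, y_s, X_q)
print(preds.shape)  # torch.Size([50, 1])
```

In a full meta-learning loop, the outer-loop objective would be evaluated on these adapted query predictions; the point of the closed-form solve is that no iterative inner-loop gradient steps are needed.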
Cite
Text
Zhou et al. "Meta-Learning with Neural Tangent Kernels." International Conference on Learning Representations, 2021.
Markdown
[Zhou et al. "Meta-Learning with Neural Tangent Kernels." International Conference on Learning Representations, 2021.](https://mlanthology.org/iclr/2021/zhou2021iclr-metalearning/)
BibTeX
@inproceedings{zhou2021iclr-metalearning,
title = {{Meta-Learning with Neural Tangent Kernels}},
author = {Zhou, Yufan and Wang, Zhenyi and Xian, Jiayi and Chen, Changyou and Xu, Jinhui},
booktitle = {International Conference on Learning Representations},
year = {2021},
url = {https://mlanthology.org/iclr/2021/zhou2021iclr-metalearning/}
}