Implicit Kernel Meta-Learning Using Kernel Integral Forms
Abstract
Meta-learning algorithms have made significant progress on image classification, but less attention has been given to the regression setting. In this paper we propose to learn the probability distribution representing a random feature kernel that we wish to use within kernel ridge regression (KRR). We introduce two instances of this meta-learning framework: learning a neural network pushforward for a translation-invariant kernel, and learning an affine pushforward for a neural network random feature kernel, both mapping from a Gaussian latent distribution. We learn the parameters of the pushforward by minimizing a meta-loss associated with the KRR objective. Since the resulting kernel does not admit an analytical form, we adopt a random feature sampling approach to approximate it. We call the resulting method Implicit Kernel Meta-Learning (IKML). We derive a meta-learning bound for IKML, which shows the role played by the number of tasks $T$, the task sample size $n$, and the number of random features $M$. In particular, the bound implies that $M$ can be chosen independently of $T$ and with only a mild dependence on $n$. We introduce one synthetic and two real-world meta-learning regression benchmark datasets. Experiments on these datasets show that IKML performs best or close to best when compared against competitive meta-learning methods.
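The abstract outlines a complete pipeline: sample Gaussian latents, push them through a learned map to obtain kernel frequencies, build random features, solve KRR in closed form on each task, and backpropagate the task loss into the pushforward. Below is a minimal PyTorch sketch of the translation-invariant instance. It is not the authors' code: all names (Pushforward, random_features, krr_predict, the hyperparameters, and the `tasks` iterator) are our own illustrative choices, and the kernel form $K_\theta(x,x') = \mathbb{E}_{z\sim\mathcal{N}(0,I)}[\cos(f_\theta(z)^\top(x-x'))]$, approximated with $M$ sampled frequencies, is a plausible reading of the abstract via Bochner's theorem.

import torch
import torch.nn as nn

class Pushforward(nn.Module):
    # Maps Gaussian latents z ~ N(0, I) to frequencies omega of a
    # translation-invariant kernel (Bochner's theorem).
    def __init__(self, latent_dim, input_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(latent_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, input_dim))

    def forward(self, z):
        return self.net(z)

def random_features(X, omegas):
    # Random Fourier features: phi(X_tr) @ phi(X_te).T approximates
    # (1/M) * sum_m cos(omega_m^T (x - x')).
    proj = X @ omegas.T                                   # shape (n, M)
    return torch.cat([proj.cos(), proj.sin()], dim=1) / omegas.shape[0] ** 0.5

def krr_predict(X_tr, y_tr, X_te, omegas, lam=1e-3):
    # Kernel ridge regression solved in closed form in the 2M-dimensional
    # feature space induced by the sampled frequencies.
    Phi_tr = random_features(X_tr, omegas)
    Phi_te = random_features(X_te, omegas)
    A = Phi_tr.T @ Phi_tr + lam * torch.eye(Phi_tr.shape[1])
    return Phi_te @ torch.linalg.solve(A, Phi_tr.T @ y_tr)

# Meta-training: the KRR test error on each task serves as the meta-loss,
# and it is differentiable in the pushforward parameters through the
# sampled omegas. `tasks` is assumed to yield per-task tensors
# (X_tr, y_tr, X_te, y_te); it is a placeholder, not a real data loader.
latent_dim, input_dim, M = 8, 1, 100
push = Pushforward(latent_dim, input_dim)
opt = torch.optim.Adam(push.parameters(), lr=1e-3)
for X_tr, y_tr, X_te, y_te in tasks:
    omegas = push(torch.randn(M, latent_dim))             # fresh features per task
    loss = ((krr_predict(X_tr, y_tr, X_te, omegas) - y_te) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

Note the role of $M$ here: the number of sampled frequencies only controls the quality of the kernel approximation, which is consistent with the paper's bound letting $M$ be chosen independently of the number of tasks $T$.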
Cite

Text:
Falk et al. "Implicit Kernel Meta-Learning Using Kernel Integral Forms." Uncertainty in Artificial Intelligence, 2022.

Markdown:
[Falk et al. "Implicit Kernel Meta-Learning Using Kernel Integral Forms." Uncertainty in Artificial Intelligence, 2022.](https://mlanthology.org/uai/2022/falk2022uai-implicit/)

BibTeX:
@inproceedings{falk2022uai-implicit,
title = {{Implicit Kernel Meta-Learning Using Kernel Integral Forms}},
  author = {Falk, John Isak Texas and Ciliberto, Carlo and Pontil, Massimiliano},
booktitle = {Uncertainty in Artificial Intelligence},
year = {2022},
pages = {652-662},
volume = {180},
url = {https://mlanthology.org/uai/2022/falk2022uai-implicit/}
}