Scalable Meta-Learning with Gaussian Processes
Abstract
Meta-learning is a powerful approach that exploits historical data to quickly solve new tasks from the same distribution. In the low-data regime, methods based on the closed-form posterior of Gaussian processes (GPs) together with Bayesian optimization have achieved high performance. However, these methods are either computationally expensive or introduce assumptions that hinder a principled propagation of uncertainty between task models. This may disrupt the balance between exploration and exploitation during optimization. In this paper, we develop ScaML-GP, a modular GP model for meta-learning that is scalable in the number of tasks. Our core contribution is a carefully designed multi-task kernel that enables hierarchical training and task scalability. Conditioning ScaML-GP on the meta-data exposes its modular nature, yielding a test-task prior that combines the posteriors of the meta-task GPs. In synthetic and real-world meta-learning experiments, we demonstrate that ScaML-GP learns efficiently with both few and many meta-tasks.
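For intuition, the sketch below illustrates one way a modular test-task prior of this kind can be assembled: each meta-task GP is conditioned on its own data, and the resulting posterior means and covariances are combined with a residual GP for the new task. This is a minimal NumPy sketch, not the paper's implementation; the weighted-sum structure, the RBF kernel, the weights `w_i`, and the helper names (`gp_posterior`, `test_task_prior`) are illustrative assumptions rather than ScaML-GP's exact parameterization.

```python
import numpy as np

def rbf(X1, X2, lengthscale=1.0, variance=1.0):
    """Squared-exponential kernel matrix between two sets of inputs."""
    d2 = np.sum((X1[:, None, :] - X2[None, :, :]) ** 2, axis=-1)
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def gp_posterior(X_train, y_train, X_test, noise=1e-2):
    """Exact GP posterior mean and covariance at X_test (zero prior mean)."""
    K = rbf(X_train, X_train) + noise * np.eye(len(X_train))
    K_s = rbf(X_train, X_test)
    K_ss = rbf(X_test, X_test)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = K_s.T @ alpha
    v = np.linalg.solve(L, K_s)
    cov = K_ss - v.T @ v
    return mean, cov

def test_task_prior(meta_tasks, X_test, weights, k_new_variance=1.0):
    """Combine independent meta-task GP posteriors into a test-task prior.

    Illustrative assumption: f_test(x) = sum_i w_i * g_i(x) + f_new(x),
    with independent meta-task GPs g_i and a residual GP f_new.
    """
    mean = np.zeros(len(X_test))
    cov = rbf(X_test, X_test, variance=k_new_variance)  # residual GP prior
    for (X_i, y_i), w_i in zip(meta_tasks, weights):
        mu_i, Sigma_i = gp_posterior(X_i, y_i, X_test)
        mean += w_i * mu_i        # meta-task posterior means shift the prior mean
        cov += w_i**2 * Sigma_i   # meta-task posterior uncertainty propagates
    return mean, cov

# Usage: two hypothetical meta-tasks inform the prior for a new task.
rng = np.random.default_rng(0)
X1 = rng.uniform(-3, 3, size=(20, 1)); y1 = np.sin(X1).ravel()
X2 = rng.uniform(-3, 3, size=(15, 1)); y2 = np.sin(X2 + 0.3).ravel()
X_test = np.linspace(-3, 3, 50)[:, None]
mean, cov = test_task_prior([(X1, y1), (X2, y2)], X_test, weights=[0.5, 0.5])
```

Because each meta-task is conditioned independently in this sketch, the dominant cost is one cubic solve per meta-task, so the total cost grows linearly with the number of meta-tasks, consistent with the scalability in the number of tasks described above (under this illustrative structure).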
Cite
Text
Tighineanu et al. "Scalable Meta-Learning with Gaussian Processes." Artificial Intelligence and Statistics, 2024.
BibTeX
@inproceedings{tighineanu2024aistats-scalable,
title = {{Scalable Meta-Learning with Gaussian Processes}},
author = {Tighineanu, Petru and Grossberger, Lukas and Baireuther, Paul and Skubch, Kathrin and Falkner, Stefan and Vinogradska, Julia and Berkenkamp, Felix},
booktitle = {Artificial Intelligence and Statistics},
year = {2024},
pages = {1981--1989},
volume = {238},
url = {https://mlanthology.org/aistats/2024/tighineanu2024aistats-scalable/}
}