Transfer Learning with Gaussian Processes for Bayesian Optimization
Abstract
Bayesian optimization is a powerful paradigm for optimizing black-box functions based on scarce and noisy data. Its data efficiency can be further improved by transfer learning from related tasks. While recent transfer models meta-learn a prior from large amounts of data, in the low-data regime methods that exploit the closed-form posterior of Gaussian processes (GPs) have an advantage. In this setting, several analytically tractable transfer-model posteriors have been proposed, but the relative advantages of these methods are not well understood. In this paper, we provide a unified view of hierarchical GP models for transfer learning, which allows us to analyze the relationships between methods. As part of the analysis, we develop a novel closed-form boosted GP transfer model that sits between existing approaches in terms of complexity. We evaluate the performance of the different approaches in large-scale experiments and highlight the strengths and weaknesses of the different transfer-learning methods.
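The closed-form GP posterior the abstract refers to can be illustrated with a minimal sketch. This is a generic GP regression example in NumPy, not the paper's transfer model; the kernel choice (squared exponential), hyperparameters, and function names are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(X1, X2, lengthscale=1.0, variance=1.0):
    """Squared-exponential kernel between two sets of 1-D inputs."""
    sqdist = (X1[:, None] - X2[None, :]) ** 2
    return variance * np.exp(-0.5 * sqdist / lengthscale**2)

def gp_posterior(X_train, y_train, X_test, noise=1e-2):
    """Closed-form GP posterior mean and covariance at test inputs."""
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    K_s = rbf_kernel(X_train, X_test)
    K_ss = rbf_kernel(X_test, X_test)
    # Solve against the Gram matrix instead of forming its inverse explicitly.
    alpha = np.linalg.solve(K, y_train)
    mean = K_s.T @ alpha
    cov = K_ss - K_s.T @ np.linalg.solve(K, K_s)
    return mean, cov

# Condition on three noisy observations of a smooth function.
X_train = np.array([-1.0, 0.0, 1.0])
y_train = np.sin(X_train)
X_test = np.linspace(-2.0, 2.0, 5)
mean, cov = gp_posterior(X_train, y_train, X_test)
```

Transfer-learning variants of this posterior (e.g., using a source-task posterior as the prior mean for the target task) retain the same analytic structure, which is what makes them attractive in the low-data regime.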
Cite
Text
Tighineanu et al. "Transfer Learning with Gaussian Processes for Bayesian Optimization." Artificial Intelligence and Statistics, 2022.
Markdown
[Tighineanu et al. "Transfer Learning with Gaussian Processes for Bayesian Optimization." Artificial Intelligence and Statistics, 2022.](https://mlanthology.org/aistats/2022/tighineanu2022aistats-transfer/)
BibTeX
@inproceedings{tighineanu2022aistats-transfer,
title = {{Transfer Learning with Gaussian Processes for Bayesian Optimization}},
author = {Tighineanu, Petru and Skubch, Kathrin and Baireuther, Paul and Reiss, Attila and Berkenkamp, Felix and Vinogradska, Julia},
booktitle = {Artificial Intelligence and Statistics},
year = {2022},
pages = {6152--6181},
volume = {151},
url = {https://mlanthology.org/aistats/2022/tighineanu2022aistats-transfer/}
}