Regret Bounds for Transfer Learning in Bayesian Optimisation

Abstract

This paper studies the regret bounds of two transfer learning algorithms in Bayesian optimisation. The first algorithm models any difference between the source and target functions as a noise process. The second algorithm proposes a new way to model the difference between the source and target as a Gaussian process, which is then used to adapt the source data. We show that in both cases the regret bounds are tighter than in the no-transfer case. We also experimentally compare these algorithms against the no-transfer baseline and demonstrate the benefits of transfer learning.
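The second algorithm's core idea can be illustrated with a minimal sketch: observe the difference between target evaluations and a source model at the target's query points, fit a Gaussian process to those differences, and use its posterior mean to adapt the source data. This is a simplified illustration, not the paper's implementation; the kernel, length scales, and the synthetic source/target functions below are all assumptions chosen for the example.

```python
import numpy as np

def rbf(a, b, ls=1.0):
    # squared-exponential kernel between two 1-D input arrays
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def gp_posterior_mean(x_train, y_train, x_query, noise=1e-4, ls=1.0):
    # standard GP regression posterior mean with jitter/noise on the diagonal
    K = rbf(x_train, x_train, ls) + noise * np.eye(len(x_train))
    k = rbf(x_query, x_train, ls)
    return k @ np.linalg.solve(K, y_train)

# Hypothetical tasks: the target equals the source plus a smooth shift.
xs = np.linspace(0.0, 5.0, 20)      # plentiful source observations
ys = np.sin(xs)
xt = np.array([0.5, 2.0, 3.5])      # a few expensive target observations
yt = np.sin(xt) + 0.5 * xt          # target differs from source by 0.5*x

# Difference samples: target values minus the source GP's mean at xt.
diff = yt - gp_posterior_mean(xs, ys, xt)

# Model the difference itself as a GP and predict it at the source inputs.
diff_at_source = gp_posterior_mean(xt, diff, xs, ls=2.0)

# Adapt the source data toward the target function before optimisation.
ys_adapted = ys + diff_at_source
```

With the source data adapted this way, a standard GP-UCB loop can treat the adjusted observations as (approximate) target-function data, which is what drives the tighter regret bound relative to discarding the source entirely.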

Cite

Text

Shilton et al. "Regret Bounds for Transfer Learning in Bayesian Optimisation." International Conference on Artificial Intelligence and Statistics, 2017.

Markdown

[Shilton et al. "Regret Bounds for Transfer Learning in Bayesian Optimisation." International Conference on Artificial Intelligence and Statistics, 2017.](https://mlanthology.org/aistats/2017/shilton2017aistats-regret/)

BibTeX

@inproceedings{shilton2017aistats-regret,
  title     = {{Regret Bounds for Transfer Learning in Bayesian Optimisation}},
  author    = {Shilton, Alistair and Gupta, Sunil and Rana, Santu and Venkatesh, Svetha},
  booktitle = {International Conference on Artificial Intelligence and Statistics},
  year      = {2017},
  pages     = {307--315},
  url       = {https://mlanthology.org/aistats/2017/shilton2017aistats-regret/}
}