Distributed Batch Gaussian Process Optimization
Abstract
This paper presents a novel distributed batch Gaussian process upper confidence bound (DB-GP-UCB) algorithm for performing batch Bayesian optimization (BO) of highly complex, costly-to-evaluate black-box objective functions. In contrast to existing batch BO algorithms, DB-GP-UCB can jointly optimize a batch of inputs (as opposed to selecting the inputs of a batch one at a time) while still preserving scalability in the batch size. To realize this, we generalize GP-UCB to a new batch variant amenable to a Markov approximation, which can then be naturally formulated as a multi-agent distributed constraint optimization problem in order to fully exploit the efficiency of its state-of-the-art solvers for achieving linear time in the batch size. Our DB-GP-UCB algorithm offers practitioners the flexibility to trade off between the approximation quality and time efficiency by varying the Markov order. We provide a theoretical guarantee for the convergence rate of DB-GP-UCB via bounds on its cumulative regret. Empirical evaluation on synthetic benchmark objective functions and a real-world optimization problem shows that DB-GP-UCB outperforms the state-of-the-art batch BO algorithms.
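For intuition about what the abstract contrasts, below is a minimal, hypothetical sketch of the standard *greedy* batch GP-UCB baseline, which selects batch inputs one at a time by hallucinating the GP posterior mean at each chosen point. This is NOT the paper's DB-GP-UCB algorithm (which jointly optimizes the batch via a Markov approximation and a distributed constraint optimization solver); all function names here are illustrative.

```python
# Illustrative greedy batch GP-UCB baseline (hypothetical helper names;
# NOT the paper's DB-GP-UCB, which jointly optimizes the whole batch).
import numpy as np

def rbf_kernel(A, B, lengthscale=0.5):
    """Squared-exponential kernel between two sets of row vectors."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale ** 2)

def gp_posterior(X_train, y_train, X_cand, noise=1e-4):
    """GP posterior mean and variance at the candidate points."""
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    Ks = rbf_kernel(X_train, X_cand)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = np.diag(rbf_kernel(X_cand, X_cand)) - (v ** 2).sum(0)
    return mu, np.maximum(var, 1e-12)

def greedy_batch_ucb(X_train, y_train, X_cand, batch_size=3, beta=2.0):
    """Select a batch one input at a time: maximize the UCB
    mu + sqrt(beta * var), then hallucinate the GP mean at the
    chosen point so the next selection accounts for it."""
    Xt, yt = X_train.copy(), y_train.copy()
    batch = []
    for _ in range(batch_size):
        mu, var = gp_posterior(Xt, yt, X_cand)
        i = int(np.argmax(mu + np.sqrt(beta * var)))
        batch.append(i)
        Xt = np.vstack([Xt, X_cand[i]])
        yt = np.append(yt, mu[i])  # hallucinated observation
    return batch
```

DB-GP-UCB's contribution is to replace this sequential selection with a joint optimization over all batch inputs that still scales linearly in the batch size.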
Cite

Text:
Daxberger and Low. "Distributed Batch Gaussian Process Optimization." International Conference on Machine Learning, 2017.

Markdown:
[Daxberger and Low. "Distributed Batch Gaussian Process Optimization." International Conference on Machine Learning, 2017.](https://mlanthology.org/icml/2017/daxberger2017icml-distributed/)

BibTeX:
@inproceedings{daxberger2017icml-distributed,
title = {{Distributed Batch Gaussian Process Optimization}},
author = {Daxberger, Erik A. and Low, Bryan Kian Hsiang},
booktitle = {International Conference on Machine Learning},
year = {2017},
  pages = {951--960},
volume = {70},
url = {https://mlanthology.org/icml/2017/daxberger2017icml-distributed/}
}