Bayesian Optimization of Robustness Measures Under Input Uncertainty: A Randomized Gaussian Process Upper Confidence Bound Approach
Abstract
Bayesian optimization based on the Gaussian process upper confidence bound (GP-UCB) offers a theoretical guarantee for optimizing black-box functions. In practice, however, black-box functions often involve input uncertainty. To handle such cases, GP-UCB can be extended to optimize evaluation criteria known as robustness measures. However, GP-UCB-based methods for robustness measures require a trade-off parameter, $\beta$, which, as in the original GP-UCB, must be set sufficiently large to ensure theoretical validity. In this study, we propose randomized robustness measure GP-UCB (RRGP-UCB), a novel method that samples $\beta$ from a chi-squared-based probability distribution. This approach eliminates the need to explicitly specify $\beta$. Notably, the expected value of $\beta$ under this distribution is not excessively large. Furthermore, we show that RRGP-UCB provides tight bounds on the expected regret between the optimal and estimated solutions. Numerical experiments demonstrate the effectiveness of the proposed method.
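To make the core mechanism concrete, here is a minimal Python sketch of a randomized-β GP-UCB loop. This is an illustration only, not the paper's RRGP-UCB algorithm: the robustness measure under input uncertainty is omitted, and the toy objective, RBF kernel, and chi-squared degrees of freedom (df = 2) are assumptions made for the example, not values taken from the paper. The only point it demonstrates is sampling the trade-off parameter β_t at every iteration instead of fixing it to a conservatively large constant.

```python
# Hypothetical sketch of a randomized GP-UCB step (not the paper's exact
# RRGP-UCB procedure). beta_t is drawn from a chi-squared distribution each
# round rather than being fixed by hand; df=2, the kernel, and the 1-D toy
# objective are illustrative assumptions.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

def f(x):
    # Toy black-box objective (assumption for illustration).
    return np.sin(3.0 * x) + 0.5 * np.cos(7.0 * x)

# Initial design: a few random evaluations with small observation noise.
X = rng.uniform(0.0, 2.0, size=(5, 1))
y = f(X).ravel() + 0.01 * rng.standard_normal(5)

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.3), alpha=1e-4)
candidates = np.linspace(0.0, 2.0, 500).reshape(-1, 1)

for t in range(20):
    gp.fit(X, y)
    mu, sigma = gp.predict(candidates, return_std=True)
    # Randomized trade-off parameter: sample beta_t from a chi-squared
    # distribution instead of using a fixed, sufficiently large constant.
    beta_t = rng.chisquare(df=2)
    ucb = mu + np.sqrt(beta_t) * sigma
    # Query the point maximizing the randomized upper confidence bound.
    x_next = candidates[np.argmax(ucb)].reshape(1, -1)
    X = np.vstack([X, x_next])
    y = np.append(y, f(x_next).ravel() + 0.01 * rng.standard_normal(1))

print("Best observed value:", y.max())
```

The design intuition matches the abstract: because β_t is sampled, its expected value stays moderate rather than excessively large, while occasional large draws still force exploration; this is the mechanism behind the expected-regret bounds the paper establishes.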
Cite
Text
Inatsu. "Bayesian Optimization of Robustness Measures Under Input Uncertainty: A Randomized Gaussian Process Upper Confidence Bound Approach." Transactions on Machine Learning Research, 2025.
Markdown
[Inatsu. "Bayesian Optimization of Robustness Measures Under Input Uncertainty: A Randomized Gaussian Process Upper Confidence Bound Approach." Transactions on Machine Learning Research, 2025.](https://mlanthology.org/tmlr/2025/inatsu2025tmlr-bayesian/)
BibTeX
@article{inatsu2025tmlr-bayesian,
title = {{Bayesian Optimization of Robustness Measures Under Input Uncertainty: A Randomized Gaussian Process Upper Confidence Bound Approach}},
author = {Inatsu, Yu},
journal = {Transactions on Machine Learning Research},
year = {2025},
url = {https://mlanthology.org/tmlr/2025/inatsu2025tmlr-bayesian/}
}