BOFormer: Learning to Solve Multi-Objective Bayesian Optimization via Non-Markovian RL
Abstract
Bayesian optimization (BO) offers an efficient pipeline for optimizing black-box functions with the help of a Gaussian process prior and an acquisition function (AF). Recently, in the context of single-objective BO, learning-based AFs have achieved promising empirical results, owing to their favorable non-myopic nature. Despite this, directly extending these approaches to multi-objective Bayesian optimization (MOBO) suffers from the *hypervolume identifiability issue*, which results from the non-Markovian nature of MOBO problems. To tackle this, inspired by the non-Markovian RL literature and the success of Transformers in language modeling, we present a generalized deep Q-learning framework and propose *BOFormer*, which substantiates this framework for MOBO via sequence modeling. Through extensive evaluation, we demonstrate that BOFormer consistently outperforms the benchmark rule-based and learning-based algorithms on various synthetic MOBO and real-world multi-objective hyperparameter optimization problems.
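To make the setup concrete, below is a minimal, self-contained sketch of a rule-based MOBO loop of the kind the paper benchmarks against: an independent GP surrogate per objective and a greedy acquisition that queries the candidate whose predicted outcome most improves the current hypervolume. All specifics here (the toy objectives `f1` and `f2`, the RBF lengthscale, the reference point) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def rbf_kernel(a, b, lengthscale=0.2):
    # Squared-exponential kernel on 1-D inputs.
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

def gp_posterior_mean(x_train, y_train, x_test, noise=1e-6):
    # GP regression posterior mean with a zero prior mean.
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    return rbf_kernel(x_test, x_train) @ np.linalg.solve(K, y_train)

def hypervolume_2d(points, ref):
    # Dominated hypervolume of a 2-D maximization front w.r.t. a reference
    # point, via a sweep over points sorted by the first objective.
    front = sorted((p for p in points if np.all(p > ref)), key=lambda p: -p[0])
    hv, prev_y = 0.0, ref[1]
    for x, y in front:
        if y > prev_y:
            hv += (x - ref[0]) * (y - prev_y)
            prev_y = y
    return hv

# Toy two-objective problem on [0, 1] (assumed for illustration only).
f1 = lambda x: np.sin(3.0 * x)
f2 = lambda x: np.cos(2.0 * x)
ref_point = np.array([-1.5, -1.5])

rng = np.random.default_rng(0)
x_obs = rng.uniform(0.0, 1.0, size=3)
y_obs = np.stack([f1(x_obs), f2(x_obs)], axis=1)
candidates = np.linspace(0.0, 1.0, 200)

for _ in range(10):
    # Posterior mean of each objective at every candidate point.
    mu = np.stack(
        [gp_posterior_mean(x_obs, y_obs[:, j], candidates) for j in range(2)],
        axis=1,
    )
    hv_now = hypervolume_2d(y_obs, ref_point)
    # Greedy rule: query the point whose predicted outcome adds the most
    # hypervolume. BOFormer instead *learns* the acquisition, conditioning
    # a Transformer-based Q-function on the query history.
    gains = [hypervolume_2d(np.vstack([y_obs, m]), ref_point) - hv_now for m in mu]
    x_next = candidates[int(np.argmax(gains))]
    x_obs = np.append(x_obs, x_next)
    y_obs = np.vstack([y_obs, [f1(x_next), f2(x_next)]])

print("final hypervolume:", hypervolume_2d(y_obs, ref_point))
```

Note that the hypervolume gain in the loop depends on the entire set of past observations rather than on the latest GP posterior alone; this is the non-Markovian structure the abstract points to, and it motivates treating the acquisition policy as a sequence model over the observation history.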
Cite
Text
Hung et al. "BOFormer: Learning to Solve Multi-Objective Bayesian Optimization via Non-Markovian RL." ICML 2024 Workshops: AutoRL, 2024.

Markdown

[Hung et al. "BOFormer: Learning to Solve Multi-Objective Bayesian Optimization via Non-Markovian RL." ICML 2024 Workshops: AutoRL, 2024.](https://mlanthology.org/icmlw/2024/hung2024icmlw-boformer/)

BibTeX
@inproceedings{hung2024icmlw-boformer,
title = {{BOFormer: Learning to Solve Multi-Objective Bayesian Optimization via Non-Markovian RL}},
author = {Hung, Yu-Heng and Lin, Kai-Jie and Lin, Yu-Heng and Wang, Chien-Yi and Hsieh, Ping-Chun},
booktitle = {ICML 2024 Workshops: AutoRL},
year = {2024},
url = {https://mlanthology.org/icmlw/2024/hung2024icmlw-boformer/}
}