PABBO: Preferential Amortized Black-Box Optimization
Abstract
Preferential Bayesian Optimization (PBO) is a sample-efficient method to learn latent user utilities from preferential feedback over a pair of designs. It relies on a statistical surrogate model for the latent function, usually a Gaussian process, and an acquisition strategy to select the next candidate pair on which to obtain user feedback. Due to the non-conjugacy of the associated likelihood, every PBO step requires a significant amount of computation with various approximate inference techniques. This computational overhead is incompatible with the way humans interact with computers, hindering the use of PBO in real-world settings. Building on recent advances in amortized BO, we propose to circumvent this issue by fully amortizing PBO, meta-learning both the surrogate and the acquisition function. Our method comprises a novel transformer neural process architecture, trained using reinforcement learning and tailored auxiliary losses. On a benchmark composed of synthetic and real-world datasets, our method is several orders of magnitude faster than the usual Gaussian process-based strategies and often outperforms them in accuracy.
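To make the preferential-feedback setup concrete, here is a toy sketch (not the paper's method) of duel-based optimization: a hypothetical latent utility is never observed directly; the optimizer only sees binary outcomes of pairwise comparisons, drawn from a Bradley-Terry response model, and tracks empirical win counts over a fixed candidate grid. The utility function, its scale, and the number of duels are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def utility(x):
    # Hypothetical latent user utility, unknown to the optimizer.
    # Peaked at x = 0.3; the scale controls preference noise.
    return -50.0 * (x - 0.3) ** 2

def prefers_first(x1, x2):
    # Bradley-Terry feedback: P(x1 preferred) = sigmoid(u(x1) - u(x2)).
    # This non-Gaussian likelihood is what makes PBO inference non-conjugate.
    p = 1.0 / (1.0 + np.exp(-(utility(x1) - utility(x2))))
    return rng.random() < p

# Toy loop: duel random candidate pairs and accumulate win counts.
candidates = np.linspace(0.0, 1.0, 21)
wins = np.zeros_like(candidates)
for _ in range(3000):
    i, j = rng.choice(len(candidates), size=2, replace=False)
    if prefers_first(candidates[i], candidates[j]):
        wins[i] += 1
    else:
        wins[j] += 1

# The candidate with the most duel wins approximates the utility maximizer.
best = candidates[np.argmax(wins)]
```

A GP-based PBO method replaces the win-count heuristic with a surrogate posterior over the latent utility and an acquisition rule for choosing the next pair; PABBO amortizes both of these with a trained network.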
Cite
Text
Zhang et al. "PABBO: Preferential Amortized Black-Box Optimization." International Conference on Learning Representations, 2025.
Markdown
[Zhang et al. "PABBO: Preferential Amortized Black-Box Optimization." International Conference on Learning Representations, 2025.](https://mlanthology.org/iclr/2025/zhang2025iclr-pabbo/)
BibTeX
@inproceedings{zhang2025iclr-pabbo,
title = {{PABBO: Preferential Amortized Black-Box Optimization}},
author = {Zhang, Xinyu and Huang, Daolang and Kaski, Samuel and Martinelli, Julien},
booktitle = {International Conference on Learning Representations},
year = {2025},
url = {https://mlanthology.org/iclr/2025/zhang2025iclr-pabbo/}
}