Risk-Controlling Model Selection via Guided Bayesian Optimization
Abstract
Adjustable hyperparameters of machine learning models typically impact various key trade-offs such as accuracy, fairness, robustness, or inference cost. Our goal in this paper is to find a configuration that adheres to user-specified limits on certain risks while being useful with respect to other conflicting metrics. We solve this by combining Bayesian Optimization (BO) with rigorous risk-controlling procedures, where our core idea is to steer BO towards an efficient testing strategy. Our BO method identifies a set of Pareto optimal configurations residing in a designated region of interest. The resulting candidates are statistically verified, and the best-performing configuration is selected with guaranteed risk levels. We demonstrate the effectiveness of our approach on a range of tasks with multiple desiderata, including low error rates, equitable predictions, handling spurious correlations, managing rate and distortion in generative models, and reducing computational costs.
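The abstract describes a two-stage pattern: a Bayesian-optimization stage proposes Pareto-optimal candidate configurations inside a region of interest, and a statistical testing stage certifies which candidates satisfy the user-specified risk limits before the best one is selected. The sketch below is a minimal, hypothetical illustration of the second (verification) stage only, assuming candidates have already been produced by BO. It uses a Bonferroni-corrected one-sided Hoeffding test on bounded losses as one possible risk-controlling procedure; the paper's actual testing strategy and selection rule may differ, and all function names and parameters here are illustrative assumptions.

```python
import numpy as np

def hoeffding_p_value(emp_risk, alpha, n):
    """One-sided Hoeffding p-value for H0: true risk > alpha,
    given the empirical risk of n i.i.d. losses bounded in [0, 1]."""
    return float(np.exp(-2.0 * n * max(alpha - emp_risk, 0.0) ** 2))

def select_risk_controlled(candidates, losses, objective, alpha=0.1, delta=0.1):
    """Among candidate configurations (assumed to come from a prior
    Bayesian-optimization stage, not implemented here), keep those whose
    risk is certified <= alpha with probability >= 1 - delta, then return
    the certified configuration with the best secondary objective.

    candidates: list of configuration ids (hypothetical)
    losses:     dict id -> array of per-example losses in [0, 1] on held-out data
    objective:  dict id -> secondary metric to maximize among certified configs
    """
    threshold = delta / len(candidates)  # Bonferroni correction over candidates
    certified = []
    for c in candidates:
        l = np.asarray(losses[c])
        if hoeffding_p_value(l.mean(), alpha, len(l)) <= threshold:
            certified.append(c)
    if not certified:
        return None  # abstain: no configuration passes the risk test
    return max(certified, key=lambda c: objective[c])
```

Restricting the BO search to a region of interest keeps the candidate set small, which makes the multiple-testing correction above less conservative and the final certified selection more useful.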
Cite
Text
Laufer-Goldshtein et al. "Risk-Controlling Model Selection via Guided Bayesian Optimization." Transactions on Machine Learning Research, 2024.
Markdown
[Laufer-Goldshtein et al. "Risk-Controlling Model Selection via Guided Bayesian Optimization." Transactions on Machine Learning Research, 2024.](https://mlanthology.org/tmlr/2024/laufergoldshtein2024tmlr-riskcontrolling/)
BibTeX
@article{laufergoldshtein2024tmlr-riskcontrolling,
title = {{Risk-Controlling Model Selection via Guided Bayesian Optimization}},
author = {Laufer-Goldshtein, Bracha and Fisch, Adam and Barzilay, Regina and Jaakkola, Tommi},
journal = {Transactions on Machine Learning Research},
year = {2024},
url = {https://mlanthology.org/tmlr/2024/laufergoldshtein2024tmlr-riskcontrolling/}
}