Derivative Free Optimization via Repeated Classification
Abstract
We develop an algorithm for minimizing a function from $n$ batched function-value measurements at each of $T$ rounds, using classifiers to identify the function's sublevel sets. We show that sufficiently accurate classifiers achieve linear convergence rates, and that the convergence rate is tied to the difficulty of actively learning sublevel sets. Further, we show that the bootstrap is a computationally efficient approximation to the necessary classification scheme. The end result is a computationally efficient derivative-free algorithm requiring no tuning that consistently outperforms other approaches on simulations, standard benchmarks, real-world DNA binding optimization, and airfoil design problems whenever batched function queries are natural.
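To make the high-level recipe concrete, below is a minimal Python sketch of optimization via repeated sublevel-set classification; it is an illustration, not the authors' implementation. Each round labels a batch of points by whether they fall in the empirical sublevel set (the lowest `quantile` of observed values), fits a bagged classifier (a stand-in for the paper's bootstrap scheme), and draws the next batch from the region the classifier predicts to be sublevel. The function name `classify_and_optimize`, the search box $[-1, 1]^{\text{dim}}$, the choice of `RandomForestClassifier`, and the top-$n$ candidate selection are all assumptions; the sketch also assumes both labels occur each round and that `f` is vectorized over a batch of points.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def classify_and_optimize(f, dim, n=50, T=20, quantile=0.3, seed=0):
    """Sketch of derivative-free optimization via repeated classification.

    Assumes f is vectorized: it maps an (n, dim) array to n function values.
    """
    rng = np.random.default_rng(seed)
    X = rng.uniform(-1.0, 1.0, size=(n, dim))  # initial batch in [-1, 1]^dim
    for _ in range(T):
        y = f(X)                                # n batched function values
        tau = np.quantile(y, quantile)          # empirical sublevel threshold
        labels = (y <= tau).astype(int)         # 1 = inside the sublevel set
        # A bagged ensemble stands in for the paper's bootstrap scheme.
        clf = RandomForestClassifier(n_estimators=100).fit(X, labels)
        # Propose many candidates; keep the n most likely to be sublevel.
        cand = rng.uniform(-1.0, 1.0, size=(20 * n, dim))
        prob = clf.predict_proba(cand)[:, 1]
        X = cand[np.argsort(-prob)[:n]]
    y = f(X)
    return X[np.argmin(y)], float(y.min())

# Example: minimize a quadratic over [-1, 1]^5.
x_best, f_best = classify_and_optimize(lambda X: (X ** 2).sum(axis=1), dim=5)
```

As each round concentrates the next batch inside the predicted sublevel set, the threshold $\tau$ shrinks, which is the mechanism behind the linear convergence rate discussed in the abstract.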
Cite
Text
Hashimoto et al. "Derivative Free Optimization via Repeated Classification." International Conference on Artificial Intelligence and Statistics, 2018.
Markdown
[Hashimoto et al. "Derivative Free Optimization via Repeated Classification." International Conference on Artificial Intelligence and Statistics, 2018.](https://mlanthology.org/aistats/2018/hashimoto2018aistats-derivative/)
BibTeX
@inproceedings{hashimoto2018aistats-derivative,
title = {{Derivative Free Optimization via Repeated Classification}},
author = {Hashimoto, Tatsunori and Yadlowsky, Steve and Duchi, John C.},
booktitle = {International Conference on Artificial Intelligence and Statistics},
year = {2018},
pages = {2027--2036},
url = {https://mlanthology.org/aistats/2018/hashimoto2018aistats-derivative/}
}