Information-Theoretic Lower Bounds for Convex Optimization with Erroneous Oracles
Abstract
We consider the problem of optimizing convex and concave functions with access to an erroneous zeroth-order oracle. In particular, for a given function $x \to f(x)$ we consider optimization when one is given access to absolute error oracles that return values in $[f(x) - \epsilon, f(x) + \epsilon]$, or relative error oracles that return values in $[(1 - \epsilon)f(x), (1 + \epsilon)f(x)]$, for some $\epsilon > 0$. We show stark information-theoretic impossibility results for minimizing convex functions and maximizing concave functions over polytopes in this model.
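As a minimal illustration (not code from the paper), the two oracle models in the abstract can be sketched as wrappers around a true function value: an absolute error oracle perturbs $f(x)$ additively within $\pm\epsilon$, while a relative error oracle perturbs it multiplicatively within a $(1 \pm \epsilon)$ factor. The quadratic $f$ and the value $\epsilon = 0.1$ below are illustrative choices, not from the paper.

```python
import random

def absolute_error_oracle(f, eps):
    """Zeroth-order oracle returning a value in [f(x) - eps, f(x) + eps]."""
    def oracle(x):
        return f(x) + random.uniform(-eps, eps)
    return oracle

def relative_error_oracle(f, eps):
    """Zeroth-order oracle returning a value in [(1 - eps) f(x), (1 + eps) f(x)]
    (for nonnegative f)."""
    def oracle(x):
        return f(x) * (1.0 + random.uniform(-eps, eps))
    return oracle

# Illustrative usage: a convex quadratic queried through both oracles.
f = lambda x: x * x
eps = 0.1
abs_oracle = absolute_error_oracle(f, eps)
rel_oracle = relative_error_oracle(f, eps)

# Every query stays within the stated error bands around f(2.0) = 4.0.
assert abs(abs_oracle(2.0) - 4.0) <= eps
assert (1.0 - eps) * 4.0 <= rel_oracle(2.0) <= (1.0 + eps) * 4.0
```

The paper's lower bounds say that, even with unlimited queries to such an oracle, no algorithm can guarantee a good optimum over a polytope; the sketch only makes the query model itself concrete.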
Cite
Text
Singer and Vondrak. "Information-Theoretic Lower Bounds for Convex Optimization with Erroneous Oracles." Neural Information Processing Systems, 2015.
Markdown
[Singer and Vondrak. "Information-Theoretic Lower Bounds for Convex Optimization with Erroneous Oracles." Neural Information Processing Systems, 2015.](https://mlanthology.org/neurips/2015/singer2015neurips-informationtheoretic/)
BibTeX
@inproceedings{singer2015neurips-informationtheoretic,
title = {{Information-Theoretic Lower Bounds for Convex Optimization with Erroneous Oracles}},
author = {Singer, Yaron and Vondrak, Jan},
booktitle = {Neural Information Processing Systems},
year = {2015},
pages = {3204-3212},
url = {https://mlanthology.org/neurips/2015/singer2015neurips-informationtheoretic/}
}