Lower Bounds for Higher-Order Convex Optimization
Abstract
State-of-the-art methods in convex and non-convex optimization employ higher-order derivative information, either implicitly or explicitly. We explore the limitations of higher-order optimization and prove that even for convex optimization, a polynomial dependence on the approximation guarantee and higher-order smoothness parameters is necessary. As a special case, we show Nesterov's accelerated cubic regularization method to be nearly tight.
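For context, Nesterov's method referenced above accelerates the cubic-regularized Newton step, which at each iteration minimizes a second-order model of the objective augmented with a cubic penalty. The sketch below states the standard Nesterov–Polyak model, not a construction taken from this paper; here $M$ is an assumed upper bound on the Lipschitz constant of the Hessian.

```latex
% Cubic-regularized Newton step (standard form); M is an assumed upper
% bound on the Lipschitz constant of the Hessian \nabla^2 f.
x_{k+1} \in \operatorname*{arg\,min}_{y}\; f(x_k)
  + \nabla f(x_k)^{\top}(y - x_k)
  + \tfrac{1}{2}\,(y - x_k)^{\top}\nabla^2 f(x_k)\,(y - x_k)
  + \tfrac{M}{6}\,\|y - x_k\|^{3}
```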
Cite
Text
Agarwal and Hazan. "Lower Bounds for Higher-Order Convex Optimization." Annual Conference on Computational Learning Theory, 2018.
Markdown
[Agarwal and Hazan. "Lower Bounds for Higher-Order Convex Optimization." Annual Conference on Computational Learning Theory, 2018.](https://mlanthology.org/colt/2018/agarwal2018colt-lower/)
BibTeX
@inproceedings{agarwal2018colt-lower,
  title = {{Lower Bounds for Higher-Order Convex Optimization}},
  author = {Agarwal, Naman and Hazan, Elad},
  booktitle = {Annual Conference on Computational Learning Theory},
  year = {2018},
  pages = {774--792},
  url = {https://mlanthology.org/colt/2018/agarwal2018colt-lower/}
}