Near-Optimal Method for Highly Smooth Convex Optimization
Abstract
We propose a near-optimal method for highly smooth convex optimization. More precisely, in the oracle model where one obtains the $p^{th}$ order Taylor expansion of a function at the query point, we propose a method with rate of convergence $\tilde{O}(1/k^{\frac{3p+1}{2}})$ after $k$ queries to the oracle for any convex function whose $p^{th}$ order derivative is Lipschitz.
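To make the rate concrete, the setting can be written out as follows. This is a sketch of the standard formulation, assuming the usual notation $L_p$ for the Lipschitz constant of the $p^{th}$ derivative, $x_0$ for the starting point, and $x^*$ for a minimizer (these symbols are not taken from the abstract itself): for a convex $f$ with $\|\nabla^p f(x) - \nabla^p f(y)\| \le L_p \|x - y\|$, queried through the oracle $x \mapsto (f(x), \nabla f(x), \dots, \nabla^p f(x))$, the method returns after $k$ queries a point $x_k$ satisfying

$$f(x_k) - f(x^*) \le \tilde{O}\!\left(\frac{L_p \, \|x_0 - x^*\|^{p+1}}{k^{\frac{3p+1}{2}}}\right).$$

For $p = 1$ this recovers the $1/k^2$ rate of Nesterov's accelerated gradient method, and the exponent $\frac{3p+1}{2}$ matches the known lower bound for this oracle model up to logarithmic factors, which is the sense in which the method is near-optimal.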
Cite
Text
Bubeck et al. "Near-Optimal Method for Highly Smooth Convex Optimization." Conference on Learning Theory, 2019.

Markdown
[Bubeck et al. "Near-Optimal Method for Highly Smooth Convex Optimization." Conference on Learning Theory, 2019.](https://mlanthology.org/colt/2019/bubeck2019colt-nearoptimal/)

BibTeX
@inproceedings{bubeck2019colt-nearoptimal,
  title     = {{Near-Optimal Method for Highly Smooth Convex Optimization}},
  author    = {Bubeck, Sébastien and Jiang, Qijia and Lee, Yin Tat and Li, Yuanzhi and Sidford, Aaron},
  booktitle = {Conference on Learning Theory},
  year      = {2019},
  pages     = {492-507},
  volume    = {99},
  url       = {https://mlanthology.org/colt/2019/bubeck2019colt-nearoptimal/}
}