Non-Euclidean High-Order Smooth Convex Optimization (Extended Abstract)

Abstract

We develop algorithms for optimizing convex objectives with Hölder continuous $q$-th derivatives using a $q$-th order oracle, for any $q \geq 1$. Our algorithms work with general norms under mild conditions, including the $\ell_p$-settings for $1 \leq p \leq \infty$. We can also optimize structured functions that admit an inexact implementation of a non-Euclidean ball optimization oracle. We do this by developing a non-Euclidean inexact accelerated proximal point method that makes use of an *inexact uniformly convex regularizer*. We prove a lower bound for general norms showing that our algorithms are nearly optimal in high dimensions in the black-box oracle model for $\ell_p$-settings and all $q \geq 1$, even in randomized and parallel settings. When applied to the first-order smooth case, this new lower bound resolves an open question in parallel convex optimization.
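For context, here is a minimal sketch of the standard definitions behind the abstract's assumptions; the notation ($f$, $L$, $\nu$, $\psi$, $\sigma$, $r$, $\lambda_k$) is ours for illustration and not necessarily the paper's. A function $f$ has $(L, \nu)$-Hölder continuous $q$-th derivatives with respect to a norm $\|\cdot\|$ if

$$\|\nabla^q f(x) - \nabla^q f(y)\|_{\mathrm{op}} \leq L \,\|x - y\|^{\nu} \quad \text{for all } x, y,$$

where $\|\cdot\|_{\mathrm{op}}$ is the operator norm induced by $\|\cdot\|$ on symmetric $q$-linear forms; the case $\nu = 1$ recovers Lipschitz continuity of $\nabla^q f$. A regularizer $\psi$ is $\sigma$-uniformly convex of degree $r \geq 2$ with respect to $\|\cdot\|$ if

$$\psi(y) \geq \psi(x) + \langle \nabla \psi(x),\, y - x \rangle + \frac{\sigma}{r} \|y - x\|^{r} \quad \text{for all } x, y;$$

for $r = 2$ this reduces to $\sigma$-strong convexity. An accelerated proximal point method of the kind the abstract alludes to repeatedly solves regularized subproblems of the schematic form

$$x_{k+1} \approx \operatorname*{arg\,min}_{x} \Big\{ f(x) + \frac{1}{\lambda_k}\, \psi(x - x_k) \Big\},$$

where, per the abstract, $\psi$ is non-Euclidean and only inexactly uniformly convex, and each subproblem is solved only approximately (e.g., via the inexact ball optimization oracle the abstract mentions).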

Cite

Text

Contreras et al. "Non-Euclidean High-Order Smooth Convex Optimization (Extended Abstract)." Proceedings of Thirty Eighth Conference on Learning Theory, 2025.

Markdown

[Contreras et al. "Non-Euclidean High-Order Smooth Convex Optimization (Extended Abstract)." Proceedings of Thirty Eighth Conference on Learning Theory, 2025.](https://mlanthology.org/colt/2025/contreras2025colt-noneuclidean/)

BibTeX

@inproceedings{contreras2025colt-noneuclidean,
  title     = {{Non-Euclidean High-Order Smooth Convex Optimization (Extended Abstract)}},
  author    = {Contreras, Juan Pablo and Guzmán, Cristóbal and Martínez-Rubio, David},
  booktitle = {Proceedings of Thirty Eighth Conference on Learning Theory},
  year      = {2025},
  pages     = {1330-1330},
  volume    = {291},
  url       = {https://mlanthology.org/colt/2025/contreras2025colt-noneuclidean/}
}