Accelerated Distance-Adaptive Methods for Hölder Smooth and Convex Optimization
Abstract
This paper introduces new parameter-free first-order methods for convex optimization problems in which the objective function exhibits Hölder smoothness. Inspired by the recently proposed distance-over-gradient (DOG) technique, we propose an accelerated distance-adaptive method that achieves optimal anytime convergence rates for Hölder smooth problems without requiring prior knowledge of smoothness parameters or explicit parameter tuning. Importantly, our parameter-free approach removes the need to specify the target accuracy in advance, addressing a significant limitation of the universal fast gradient method (Nesterov, 2015). We further present a parameter-free accelerated method that eliminates the need for line-search procedures and extend it to convex stochastic optimization. Preliminary experimental results highlight the effectiveness of our approach on convex nonsmooth problems and its advantages over existing parameter-free and accelerated methods.
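For readers unfamiliar with the distance-over-gradient idea the abstract builds on, the sketch below illustrates the baseline DoG-style parameter-free step size (step size = maximum distance traveled from the start divided by the root of accumulated squared gradient norms). This is only an illustrative sketch of that prior technique under assumed names (`dog_sketch`, `f_grad`, `r_eps`); it is not the accelerated distance-adaptive method proposed in the paper.

```python
import numpy as np

def dog_sketch(f_grad, x0, steps=1000, r_eps=1e-6):
    """Illustrative DoG-style gradient method (not the paper's algorithm).

    f_grad : callable returning a (sub)gradient at a point (assumed oracle)
    x0     : starting point
    r_eps  : small floor on the initial movement radius (assumed parameter)
    """
    x = np.asarray(x0, dtype=float).copy()
    x_init = x.copy()
    max_dist = r_eps          # \bar{r}_t = max_{i<=t} ||x_i - x_0||, floored at r_eps
    grad_sq_sum = 0.0         # G_t = sum_{i<=t} ||g_i||^2
    for _ in range(steps):
        g = f_grad(x)
        grad_sq_sum += float(np.dot(g, g))
        max_dist = max(max_dist, float(np.linalg.norm(x - x_init)))
        eta = max_dist / np.sqrt(grad_sq_sum + 1e-12)   # parameter-free step size
        x = x - eta * g
    return x

# Example: a convex nonsmooth objective f(x) = ||x - 1||_1 with subgradient sign(x - 1).
if __name__ == "__main__":
    subgrad = lambda x: np.sign(x - 1.0)
    print(dog_sketch(subgrad, np.zeros(5), steps=2000))
```

The step size needs no smoothness constant or target accuracy, which is the "parameter-free" property the paper extends to accelerated rates for Hölder smooth problems.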
Cite
Text
Ren et al. "Accelerated Distance-Adaptive Methods for Hölder Smooth and Convex Optimization." Advances in Neural Information Processing Systems, 2025.
Markdown
[Ren et al. "Accelerated Distance-Adaptive Methods for Hölder Smooth and Convex Optimization." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/ren2025neurips-accelerated/)
BibTeX
@inproceedings{ren2025neurips-accelerated,
title = {{Accelerated Distance-Adaptive Methods for Hölder Smooth and Convex Optimization}},
author = {Ren, Yijin and Xu, Haifeng and Deng, Qi},
booktitle = {Advances in Neural Information Processing Systems},
year = {2025},
url = {https://mlanthology.org/neurips/2025/ren2025neurips-accelerated/}
}