DADA: Dual Averaging with Distance Adaptation

Abstract

We present a novel parameter-free universal gradient method for solving convex optimization problems. Our algorithm, Dual Averaging with Distance Adaptation (DADA), is based on the classical scheme of dual averaging and dynamically adjusts its coefficients based on the observed gradients and the distance from its iterates to the starting point, without the need to know any problem-specific parameters. DADA is a universal algorithm that simultaneously works for a wide range of problem classes, as long as one is able to bound the local growth of the objective around its minimizer. Particular examples of such problem classes are nonsmooth Lipschitz functions, Lipschitz-smooth functions, Hölder-smooth functions, functions with high-order Lipschitz derivatives, quasi-self-concordant functions, and (L0, L1)-smooth functions. Furthermore, in contrast to many existing methods, DADA is suitable not only for unconstrained problems but also for constrained ones, possibly with an unbounded domain, and it does not require fixing either the number of iterations or the accuracy in advance.
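
The abstract does not state the exact update rule, but the ingredients it names (classical dual averaging with coefficients driven by the observed gradients and by the distance of the iterates to the starting point) can be illustrated with a short sketch. The Python snippet below is a generic dual-averaging loop in which the regularization coefficient is formed from the accumulated squared gradient norms and the largest observed distance to the starting point; this particular coefficient rule, the helper name dual_averaging_distance_adaptive, and the quadratic test problem are illustrative assumptions, not the method or guarantees from the paper.

import numpy as np

def dual_averaging_distance_adaptive(grad, x0, n_iters=500, d0=1e-12):
    """Sketch of dual averaging with a distance-adaptive coefficient.

    grad : callable returning a (sub)gradient of the objective at a point
    x0   : starting point (NumPy array)
    d0   : tiny initial distance estimate, used only to avoid division by zero

    NOTE: the coefficient rule below (square root of the accumulated squared
    gradient norms divided by the largest observed distance to x0) is an
    illustrative guess at "distance adaptation"; it is not the exact rule
    analyzed in the paper.
    """
    x = x0.copy()
    grad_sum = np.zeros_like(x0)   # running sum of observed (sub)gradients
    sq_norms = 0.0                 # running sum of squared gradient norms
    max_dist = d0                  # largest distance of an iterate to x0 so far
    for _ in range(n_iters):
        g = grad(x)
        grad_sum += g
        sq_norms += float(g @ g)
        max_dist = max(max_dist, float(np.linalg.norm(x - x0)))
        beta = np.sqrt(sq_norms) / max_dist
        # Unconstrained dual-averaging step: closed-form minimizer of
        #   <grad_sum, x> + (beta / 2) * ||x - x0||^2.
        x = x0 - grad_sum / beta
    return x

# Toy usage: minimize f(x) = 0.5 * ||x - 1||^2 starting from the origin;
# the iterates should settle near the all-ones minimizer.
target = np.ones(5)
x_hat = dual_averaging_distance_adaptive(lambda x: x - target, np.zeros(5))
print(np.round(x_hat, 3))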

Cite

Text

Moshtaghifar et al. "DADA: Dual Averaging with Distance Adaptation." NeurIPS 2024 Workshops: OPT, 2024.

Markdown

[Moshtaghifar et al. "DADA: Dual Averaging with Distance Adaptation." NeurIPS 2024 Workshops: OPT, 2024.](https://mlanthology.org/neuripsw/2024/moshtaghifar2024neuripsw-dada/)

BibTeX

@inproceedings{moshtaghifar2024neuripsw-dada,
  title     = {{DADA: Dual Averaging with Distance Adaptation}},
  author    = {Moshtaghifar, Mohammad and Rodomanov, Anton and Vankov, Daniil and Stich, Sebastian U},
  booktitle = {NeurIPS 2024 Workshops: OPT},
  year      = {2024},
  url       = {https://mlanthology.org/neuripsw/2024/moshtaghifar2024neuripsw-dada/}
}