Accelerating Optimization over the Space of Probability Measures

Abstract

The acceleration of gradient-based optimization methods is a subject of significant practical and theoretical importance, particularly within machine learning applications. While much attention has been directed toward optimization in Euclidean space, the need to optimize over spaces of probability measures in machine learning motivates the exploration of accelerated gradient methods in this setting as well. To this end, we introduce a Hamiltonian-flow approach analogous to momentum-based methods in Euclidean space. We demonstrate that, in the continuous-time setting, algorithms based on this approach can achieve convergence rates of arbitrarily high order. We complement our findings with numerical examples.
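To make the momentum analogy concrete, the sketch below is a minimal, hypothetical illustration and not the paper's algorithm: it integrates damped Hamiltonian dynamics, the continuous-time analogue of heavy-ball momentum, on a particle approximation of a measure, for the simplest functional F(ρ) = E_{x∼ρ}[V(x)], whose Wasserstein gradient is ∇V. The potential V, the damping parameter gamma, the step size dt, and all function names are assumptions chosen for illustration.

import numpy as np

def grad_V(x):
    # Hypothetical quadratic potential V(x) = ||x||^2 / 2, so grad V(x) = x.
    return x

def damped_hamiltonian_flow(x0, gamma=2.0, dt=0.01, steps=2000):
    # Symplectic-Euler integration of x' = v, v' = -gamma*v - grad V(x).
    # For the potential-energy functional, each particle independently
    # follows heavy-ball (momentum) dynamics; the empirical measure of the
    # particles then approximates the accelerated flow over measures.
    x, v = x0.copy(), np.zeros_like(x0)
    for _ in range(steps):
        v -= dt * (gamma * v + grad_V(x))  # momentum (velocity) update
        x += dt * v                        # position update
    return x

rng = np.random.default_rng(0)
particles = rng.normal(size=(500, 2))  # samples approximating the initial measure
out = damped_hamiltonian_flow(particles)
print("mean potential after flow:", 0.5 * (out**2).sum(axis=1).mean())

Symplectic Euler is used here because it respects the underlying Hamiltonian structure; with gamma = 0 the flow conserves energy, and it is the damping gamma > 0 that makes the dynamics dissipative and drives the particles, and hence the empirical measure, toward the minimizer of F.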

Cite

Text

Chen et al. "Accelerating Optimization over the Space of Probability Measures." Journal of Machine Learning Research, 2025.

Markdown

[Chen et al. "Accelerating Optimization over the Space of Probability Measures." Journal of Machine Learning Research, 2025.](https://mlanthology.org/jmlr/2025/chen2025jmlr-accelerating/)

BibTeX

@article{chen2025jmlr-accelerating,
  title     = {{Accelerating Optimization over the Space of Probability Measures}},
  author    = {Chen, Shi and Li, Qin and Tse, Oliver and Wright, Stephen J.},
  journal   = {Journal of Machine Learning Research},
  year      = {2025},
  pages     = {1--40},
  volume    = {26},
  url       = {https://mlanthology.org/jmlr/2025/chen2025jmlr-accelerating/}
}