Gradient Descent Algorithms for Bures-Wasserstein Barycenters

Abstract

We study first-order methods to compute the barycenter of a probability distribution $P$ over the space of probability measures with finite second moment. We develop a framework to derive global rates of convergence for both gradient descent and stochastic gradient descent despite the fact that the barycenter functional is not geodesically convex. Our analysis overcomes this technical hurdle by employing a Polyak-Łojasiewicz (PL) inequality and relies on tools from optimal transport and metric geometry. In turn, we establish a PL inequality when $P$ is supported on the Bures-Wasserstein manifold of Gaussian probability measures. This leads to the first global rates of convergence for first-order methods in this context.
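The abstract does not reproduce the algorithm, but on the Bures-Wasserstein manifold, with $P$ supported on centered Gaussians $N(0, \Sigma_1), \dots, N(0, \Sigma_n)$, the barycenter problem reduces to minimizing $\Sigma \mapsto \frac{1}{2n} \sum_{i=1}^n W_2^2\big(N(0, \Sigma), N(0, \Sigma_i)\big)$ over positive-definite matrices $\Sigma$. Below is a minimal NumPy/SciPy sketch of gradient descent with unit step for this problem; with this step size the update coincides with the well-known fixed-point iteration for Gaussian barycenters. The helper name `bw_barycenter` and its parameters are illustrative, not from the paper.

```python
import numpy as np
from scipy.linalg import sqrtm

def bw_barycenter(covs, n_iter=100, tol=1e-10):
    """Illustrative sketch: unit-step gradient descent for the
    Bures-Wasserstein barycenter of centered Gaussians N(0, covs[i]).

    Each step pushes the current iterate N(0, sigma) forward through the
    average T of the optimal-transport maps to each N(0, covs[i]):
        T_i   = sigma^{-1/2} (sigma^{1/2} covs[i] sigma^{1/2})^{1/2} sigma^{-1/2}
        sigma <- T sigma T,  where T = mean_i T_i.
    """
    sigma = np.mean(covs, axis=0)  # start from the Euclidean mean
    for _ in range(n_iter):
        root = np.real(sqrtm(sigma))      # sigma^{1/2}
        inv_root = np.linalg.inv(root)    # sigma^{-1/2}
        # Optimal-transport maps from the current iterate to each measure.
        maps = [inv_root @ np.real(sqrtm(root @ S @ root)) @ inv_root
                for S in covs]
        T = np.mean(maps, axis=0)         # averaged transport map
        new_sigma = T @ sigma @ T         # unit-step GD update
        if np.linalg.norm(new_sigma - sigma, ord="fro") < tol:
            break
        sigma = new_sigma
    return sigma

# Example: barycenter of a few random SPD covariance matrices.
rng = np.random.default_rng(0)
covs = []
for _ in range(5):
    A = rng.standard_normal((3, 3))
    covs.append(A @ A.T + 0.1 * np.eye(3))
print(bw_barycenter(covs))
```

Starting from the Euclidean mean is one convenient choice of initialization; any positive-definite matrix in the support's "span" would do, and the stopping rule on the Frobenius distance between successive iterates is likewise just a simple heuristic for this sketch.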

Cite

Text

Chewi et al. "Gradient Descent Algorithms for Bures-Wasserstein Barycenters." Conference on Learning Theory, 2020.

Markdown

[Chewi et al. "Gradient Descent Algorithms for Bures-Wasserstein Barycenters." Conference on Learning Theory, 2020.](https://mlanthology.org/colt/2020/chewi2020colt-gradient/)

BibTeX

@inproceedings{chewi2020colt-gradient,
  title     = {{Gradient Descent Algorithms for Bures-Wasserstein Barycenters}},
  author    = {Chewi, Sinho and Maunu, Tyler and Rigollet, Philippe and Stromme, Austin J.},
  booktitle = {Conference on Learning Theory},
  year      = {2020},
  pages     = {1276--1304},
  volume    = {125},
  url       = {https://mlanthology.org/colt/2020/chewi2020colt-gradient/}
}