Acceleration and Implicit Regularization in Gaussian Phase Retrieval

Abstract

We study accelerated optimization methods in the Gaussian phase retrieval problem. In this setting, we prove that gradient methods with Polyak or Nesterov momentum enjoy implicit regularization similar to that of gradient descent. This implicit regularization keeps the iterates in a benign region where the cost function is strongly convex and smooth, despite being nonconvex in general, and consequently the accelerated methods achieve faster rates of convergence than gradient descent. Experimental evidence demonstrates that the accelerated methods also converge faster than gradient descent in practice.
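To make the setting concrete, the following is a minimal sketch of heavy-ball (Polyak momentum) iterations on the standard intensity-based least-squares loss for Gaussian phase retrieval, f(x) = (1/4m) Σ_i (⟨a_i, x⟩² − y_i)². The step size, momentum parameter, iteration count, and initialization below are illustrative placeholders, not the choices analyzed in the paper.

import numpy as np

def phase_retrieval_heavy_ball(A, y, x0, step=0.01, momentum=0.9, iters=500):
    # Heavy-ball (Polyak momentum) on f(x) = (1/4m) * sum((<a_i, x>^2 - y_i)^2),
    # where the rows of A are the Gaussian sensing vectors a_i.
    m = A.shape[0]
    x, x_prev = x0.copy(), x0.copy()
    for _ in range(iters):
        Ax = A @ x
        r = Ax**2 - y                      # residuals of the squared measurements
        grad = A.T @ (r * Ax) / m          # gradient of the quartic loss
        x_next = x - step * grad + momentum * (x - x_prev)
        x_prev, x = x, x_next
    return x

# Example usage with synthetic Gaussian measurements (hypothetical parameters):
# n, m = 100, 800
# x_star = np.random.randn(n)
# A = np.random.randn(m, n)
# y = (A @ x_star)**2
# x_hat = phase_retrieval_heavy_ball(A, y, x0=np.random.randn(n))

Swapping the momentum update for Nesterov's (evaluating the gradient at the extrapolated point x + momentum * (x - x_prev)) gives the other accelerated variant discussed in the abstract.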

Cite

Text

Maunu and Molina-Fructuoso. "Acceleration and Implicit Regularization in Gaussian Phase Retrieval." Artificial Intelligence and Statistics, 2024.

Markdown

[Maunu and Molina-Fructuoso. "Acceleration and Implicit Regularization in Gaussian Phase Retrieval." Artificial Intelligence and Statistics, 2024.](https://mlanthology.org/aistats/2024/maunu2024aistats-acceleration/)

BibTeX

@inproceedings{maunu2024aistats-acceleration,
  title     = {{Acceleration and Implicit Regularization in Gaussian Phase Retrieval}},
  author    = {Maunu, Tyler and Molina-Fructuoso, Martin},
  booktitle = {Artificial Intelligence and Statistics},
  year      = {2024},
  pages     = {4060--4068},
  volume    = {238},
  url       = {https://mlanthology.org/aistats/2024/maunu2024aistats-acceleration/}
}