Tight Convergence Rate of Gradient Descent for Eigenvalue Computation

Abstract

Riemannian gradient descent (RGD) is a simple, popular, and efficient algorithm for leading eigenvector computation [AMS08]. However, the existing analysis of RGD for the eigenvalue problem is not tight: the best known rate is O(log(n/epsilon)/Delta^2), due to [Xu et al., 2018]. In this paper, we show that RGD in fact converges at rate O(log(n/epsilon)/Delta), and give instances that show the tightness of this result. This improves the best prior analysis by a quadratic factor in the dependence on the eigengap Delta. In addition, we give a tight convergence analysis of a deterministic variant of Oja's rule due to [Oja, 1982], showing that it also enjoys a fast convergence rate of O(log(n/epsilon)/Delta); previous papers gave only asymptotic characterizations [Oja, 1982; Oja, 1989; Yi et al., 2005]. Our tools for proving these convergence results include a novel reduction and chaining technique, and a noisy fixed-point iteration argument. We also provide empirical validation of our convergence rates on synthetic and real data.
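For concreteness, below is a minimal NumPy sketch of the RGD iteration the abstract refers to: a gradient step in the tangent space of the unit sphere followed by renormalization (the retraction). The step size, stopping rule, and test matrix are illustrative assumptions, not the schedule analyzed in the paper.

```python
import numpy as np

def leading_eigvec_rgd(A, eta=0.1, tol=1e-8, max_iter=10_000, seed=0):
    """Riemannian gradient ascent for max_{||x||=1} x^T A x.

    Sketch only: eta, tol, and max_iter are illustrative choices,
    not the parameters from the paper's analysis.
    """
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(A.shape[0])
    x /= np.linalg.norm(x)
    for _ in range(max_iter):
        Ax = A @ x
        # Riemannian gradient: project the Euclidean gradient A x onto
        # the tangent space of the unit sphere at x.
        grad = Ax - (x @ Ax) * x
        if np.linalg.norm(grad) < tol:
            break
        x = x + eta * grad
        x /= np.linalg.norm(x)  # retraction back to the sphere
    return x

if __name__ == "__main__":
    M = np.random.default_rng(1).standard_normal((50, 50))
    A = M @ M.T  # symmetric PSD test matrix (assumed example input)
    x = leading_eigvec_rgd(A)
    print("Rayleigh quotient :", x @ A @ x)
    print("largest eigenvalue:", np.linalg.eigvalsh(A)[-1])
```

The deterministic Oja variant discussed in the abstract differs mainly in that it takes the same tangent-space step without the explicit renormalization at every iteration.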

Cite

Text

Ding et al. "Tight Convergence Rate of Gradient Descent for Eigenvalue Computation." International Joint Conference on Artificial Intelligence, 2020. doi:10.24963/IJCAI.2020/453

Markdown

[Ding et al. "Tight Convergence Rate of Gradient Descent for Eigenvalue Computation." International Joint Conference on Artificial Intelligence, 2020.](https://mlanthology.org/ijcai/2020/ding2020ijcai-tight/) doi:10.24963/IJCAI.2020/453

BibTeX

@inproceedings{ding2020ijcai-tight,
  title     = {{Tight Convergence Rate of Gradient Descent for Eigenvalue Computation}},
  author    = {Ding, Qinghua and Zhou, Kaiwen and Cheng, James},
  booktitle = {International Joint Conference on Artificial Intelligence},
  year      = {2020},
  pages     = {3276--3282},
  doi       = {10.24963/IJCAI.2020/453},
  url       = {https://mlanthology.org/ijcai/2020/ding2020ijcai-tight/}
}