Accelerated and Sparse Algorithms for Approximate Personalized PageRank and Beyond

Abstract

It has recently been shown that ISTA, an unaccelerated optimization method, performs sparse updates for the $\ell_1$-regularized undirected personalized PageRank problem (Fountoulakis et al., 2019), leading to cheap per-iteration cost and providing the same guarantees as the approximate personalized PageRank algorithm (APPR) of Andersen et al. (2006). In this work, we design an accelerated optimization algorithm for this problem that also performs sparse updates, providing an affirmative answer to the COLT 2022 open question of Fountoulakis et al. (2022). Acceleration yields a reduced dependence on the condition number, while the dependence of our updates on the sparsity differs from that of the ISTA approach. Further, we design a second algorithm that uses conjugate directions to compute an exact solution while exploiting sparsity. Both algorithms lead to faster convergence in certain parameter regimes. Our findings apply beyond PageRank and hold for any quadratic objective whose Hessian is a positive-definite $M$-matrix.
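
To fix intuition (a minimal, generic sketch, not the algorithms of this paper): the $\ell_1$-regularized PageRank problem is a smooth quadratic plus an $\ell_1$ penalty, and ISTA is proximal gradient descent with componentwise soft thresholding. The sketch below runs dense ISTA on an objective of the form (1/2) x^T Q x - b^T x + rho * ||x||_1, where Q is a symmetric positive-definite M-matrix built from a random undirected graph; the names Q, b, rho, alpha, and the exact regularization are illustrative assumptions and do not reproduce the normalization used by Fountoulakis et al. (2019).

import numpy as np

def soft_threshold(z, tau):
    # Proximal operator of tau * ||.||_1 (componentwise soft thresholding).
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def ista_quadratic_l1(Q, b, rho, num_iters=500):
    # ISTA (proximal gradient) for  min_x  1/2 x^T Q x - b^T x + rho * ||x||_1,
    # with Q symmetric positive definite; step size 1/L, L = lambda_max(Q).
    # Starting from x = 0 with Q an M-matrix and b >= 0, the iterates stay
    # nonnegative; the sparse-update property exploited in the paper is not
    # implemented here, since this dense sketch touches every coordinate.
    L = np.linalg.eigvalsh(Q).max()
    x = np.zeros_like(b)
    for _ in range(num_iters):
        grad = Q @ x - b                       # gradient of the smooth part
        x = soft_threshold(x - grad / L, rho / L)
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 20
    A = (rng.random((n, n)) < 0.2).astype(float)
    A = np.triu(A, 1)
    A = A + A.T                                # random undirected adjacency matrix
    deg = A.sum(axis=1)
    deg[deg == 0] = 1.0                        # guard against isolated vertices
    d_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
    alpha = 0.15                               # teleportation parameter (illustrative)
    # alpha * I + (1 - alpha) * normalized Laplacian: symmetric, positive definite, M-matrix.
    Q = alpha * np.eye(n) + (1 - alpha) * (np.eye(n) - d_inv_sqrt @ A @ d_inv_sqrt)
    b = np.zeros(n)
    b[0] = alpha                               # probability mass seeded at one node
    x = ista_quadratic_l1(Q, b, rho=1e-4)
    print("support size of the approximate solution:", int((np.abs(x) > 1e-12).sum()))

The sketch only illustrates the objective and the proximal step; the paper's contribution is performing such updates sparsely, including with acceleration and with conjugate directions.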

Cite

Text

Martínez-Rubio et al. "Accelerated and Sparse Algorithms for Approximate Personalized PageRank and Beyond." Conference on Learning Theory, 2023.

Markdown

[Martínez-Rubio et al. "Accelerated and Sparse Algorithms for Approximate Personalized PageRank and Beyond." Conference on Learning Theory, 2023.](https://mlanthology.org/colt/2023/martinezrubio2023colt-accelerated-a/)

BibTeX

@inproceedings{martinezrubio2023colt-accelerated-a,
  title     = {{Accelerated and Sparse Algorithms for Approximate Personalized PageRank and Beyond}},
  author    = {Martínez-Rubio, David and Wirth, Elias and Pokutta, Sebastian},
  booktitle = {Conference on Learning Theory},
  year      = {2023},
  pages     = {2852--2876},
  volume    = {195},
  url       = {https://mlanthology.org/colt/2023/martinezrubio2023colt-accelerated-a/}
}