A Rank-1 Sketch for Matrix Multiplicative Weights

Abstract

We show that a simple randomized sketch of the matrix multiplicative weight (MMW) update enjoys (in expectation) the same regret bounds as MMW, up to a small constant factor. Unlike MMW, where every step requires full matrix exponentiation, our steps require only a single product of the form $e^A b$, which the Lanczos method approximates efficiently. Our key technique is to view the sketch as a \emph{randomized mirror projection}, and perform mirror descent analysis on the \emph{expected projection}. Our sketch solves the online eigenvector problem, improving the best known complexity bounds by $\Omega(\log^5 n)$. We also apply this sketch to semidefinite programming in saddle-point form, yielding a simple primal-dual scheme with guarantees matching the best in the literature.
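The abstract's computational claim is that each step needs only a single product of the form $e^A b$, which the Lanczos method approximates using matrix-vector products alone. The following is a minimal illustrative sketch of that primitive in NumPy/SciPy; the function name lanczos_expm_vec, the step count k, and the rank-1 usage at the end are assumptions for illustration, not the paper's own code or exact algorithm.

import numpy as np
from scipy.linalg import expm

def lanczos_expm_vec(matvec, b, k):
    # Approximate e^A b for symmetric A using k Lanczos steps.
    # matvec(x) must compute A @ x; A itself is never formed here.
    n = b.shape[0]
    k = min(k, n)
    Q = np.zeros((n, k))             # orthonormal Krylov basis
    alpha = np.zeros(k)              # diagonal of the tridiagonal T
    beta = np.zeros(max(k - 1, 0))   # off-diagonal of T
    Q[:, 0] = b / np.linalg.norm(b)
    for j in range(k):
        w = matvec(Q[:, j])
        alpha[j] = Q[:, j] @ w
        w -= alpha[j] * Q[:, j]
        if j > 0:
            w -= beta[j - 1] * Q[:, j - 1]
        # full reorthogonalization: a simple (if costly) numerical safeguard
        w -= Q[:, :j + 1] @ (Q[:, :j + 1].T @ w)
        if j + 1 < k:
            beta[j] = np.linalg.norm(w)
            if beta[j] == 0:         # invariant Krylov subspace; truncate
                Q, alpha, beta, k = Q[:, :j + 1], alpha[:j + 1], beta[:j], j + 1
                break
            Q[:, j + 1] = w / beta[j]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    # e^A b ≈ ||b|| * Q_k * e^{T_k} * e_1
    return np.linalg.norm(b) * (Q @ expm(T)[:, 0])

# Illustrative use (my reading of the abstract, hypothetical parameters):
# form a rank-1 sketch direction v ≈ e^{H/2} u from a random unit vector u,
# in place of the full density matrix exp(H) / tr(exp(H)).
rng = np.random.default_rng(0)
n = 200
G = rng.standard_normal((n, n))
H = (G + G.T) / np.sqrt(2 * n)       # symmetric test matrix
u = rng.standard_normal(n)
u /= np.linalg.norm(u)               # uniform on the unit sphere
v = lanczos_expm_vec(lambda x: 0.5 * (H @ x), u, k=30)   # v ≈ e^{H/2} u
v /= np.linalg.norm(v)
exact = expm(H / 2) @ u
print(np.linalg.norm(v - exact / np.linalg.norm(exact)))  # small residual

The point of the sketch is that only matvecs with H are required, so the per-step cost scales with the cost of multiplying by H rather than with full matrix exponentiation.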

Cite

Text

Carmon et al. "A Rank-1 Sketch for Matrix Multiplicative Weights." Conference on Learning Theory, 2019.

Markdown

[Carmon et al. "A Rank-1 Sketch for Matrix Multiplicative Weights." Conference on Learning Theory, 2019.](https://mlanthology.org/colt/2019/carmon2019colt-rank1/)

BibTeX

@inproceedings{carmon2019colt-rank1,
  title     = {{A Rank-1 Sketch for Matrix Multiplicative Weights}},
  author    = {Carmon, Yair and Duchi, John C. and Sidford, Aaron and Tian, Kevin},
  booktitle = {Conference on Learning Theory},
  year      = {2019},
  pages     = {589--623},
  volume    = {99},
  url       = {https://mlanthology.org/colt/2019/carmon2019colt-rank1/}
}