Learning-Based Low-Rank Approximations
Abstract
We introduce a “learning-based” algorithm for the low-rank decomposition problem: given an $n \times d$ matrix $A$ and a parameter $k$, compute a rank-$k$ matrix $A'$ that minimizes the approximation loss $||A - A'||_F$. The algorithm uses a training set of input matrices in order to optimize its performance. Specifically, some of the most efficient approximate algorithms for computing low-rank approximations proceed by computing a projection $SA$, where $S$ is a sparse random $m \times n$ “sketching matrix”, and then performing the singular value decomposition of $SA$. We show how to replace the random matrix $S$ with a “learned” matrix of the same sparsity to reduce the error. Our experiments show that, for multiple types of data sets, a learned sketch matrix can substantially reduce the approximation loss compared to a random matrix $S$, sometimes by an order of magnitude. We also study mixed matrices where only some of the rows are trained and the remaining ones are random, and show that such matrices still offer improved performance while retaining worst-case guarantees.
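The sketch-then-SVD pipeline that the abstract describes can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: it uses a CountSketch-style sparse random $S$ (one $\pm 1$ nonzero per column), computes $SA$, and projects $A$ onto the top-$k$ right singular vectors of $SA$. The function names and the specific dimensions are illustrative assumptions.

```python
import numpy as np

def countsketch(m, n, rng):
    # Sparse sketch matrix: each column has a single +/-1 entry
    # in a uniformly random row (same sparsity pattern the paper's
    # learned matrices keep, here with random values).
    S = np.zeros((m, n))
    rows = rng.integers(0, m, size=n)
    signs = rng.choice([-1.0, 1.0], size=n)
    S[rows, np.arange(n)] = signs
    return S

def sketched_low_rank(A, k, m, rng):
    # Sketch-then-SVD: compute the small m x d matrix SA, take its
    # top-k right singular vectors V, and project A onto their span.
    S = countsketch(m, A.shape[0], rng)
    SA = S @ A
    _, _, Vt = np.linalg.svd(SA, full_matrices=False)
    V = Vt[:k].T                     # top-k right singular vectors of SA
    return (A @ V) @ V.T             # rank-k approximation of A

rng = np.random.default_rng(0)
A = rng.standard_normal((500, 60))
A_approx = sketched_low_rank(A, k=10, m=40, rng=rng)
err = np.linalg.norm(A - A_approx)   # Frobenius-norm approximation loss
```

The learned variant studied in the paper keeps this exact pipeline but trains the nonzero values of $S$ on a set of input matrices instead of drawing them at random.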
Cite
Text
Indyk et al. "Learning-Based Low-Rank Approximations." NeurIPS 2019 Workshops: Deep_Inverse, 2019.
Markdown
[Indyk et al. "Learning-Based Low-Rank Approximations." NeurIPS 2019 Workshops: Deep_Inverse, 2019.](https://mlanthology.org/neuripsw/2019/indyk2019neuripsw-learningbased/)
BibTeX
@inproceedings{indyk2019neuripsw-learningbased,
title = {{Learning-Based Low-Rank Approximations}},
author = {Indyk, Piotr and Vakilian, Ali and Yuan, Yang},
booktitle = {NeurIPS 2019 Workshops: Deep_Inverse},
year = {2019},
url = {https://mlanthology.org/neuripsw/2019/indyk2019neuripsw-learningbased/}
}