Simple and Practical Algorithms for 𝓁p-Norm Low-Rank Approximation
Abstract
We propose practical algorithms for entrywise $\ell_p$-norm low-rank approximation, for $p = 1$ or $p = \infty$. The proposed framework, which is non-convex and gradient-based, is easy to implement and typically attains better approximations, faster, than the state of the art. From a theoretical standpoint, we show that the proposed scheme can attain $(1 + \varepsilon)$-OPT approximations. Our algorithms are not hyperparameter-free: they achieve these desiderata only assuming the algorithm's hyperparameters are known {\em a priori}, or are at least approximable. That is, our theory indicates which problem quantities need to be known in order to obtain a good solution within polynomial time, and does not contradict recent inapproximability results, such as those in \citep{song2017low}.
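The abstract describes a non-convex, gradient-based scheme for entrywise $\ell_p$-norm low-rank approximation. As a rough, hypothetical illustration of this general idea (not the paper's actual algorithm, hyperparameters, or guarantees), the $p = 1$ case can be sketched as subgradient descent on a factored model $X \approx UV^\top$, minimizing the entrywise $\ell_1$ loss:

```python
# Hypothetical sketch only: entrywise l1-norm low-rank approximation via
# plain subgradient descent on factors U, V. The function name, step size,
# and initialization scale are illustrative assumptions, not the paper's.
import numpy as np

def l1_lowrank(X, rank, steps=2000, lr=1e-3, seed=0):
    rng = np.random.default_rng(seed)
    m, n = X.shape
    # Small random initialization of the two factors.
    U = 0.1 * rng.standard_normal((m, rank))
    V = 0.1 * rng.standard_normal((n, rank))
    for _ in range(steps):
        R = U @ V.T - X        # residual of the current approximation
        S = np.sign(R)         # entrywise subgradient of the l1 loss
        # Simultaneous subgradient step on both factors.
        U, V = U - lr * (S @ V), V - lr * (S.T @ U)
    return U, V

# Usage: approximate a matrix that is exactly rank 3.
rng = np.random.default_rng(1)
X = rng.standard_normal((20, 3)) @ rng.standard_normal((3, 15))
U, V = l1_lowrank(X, rank=3)
```

With a fixed step size, a subgradient method like this only reaches a neighborhood of a stationary point; the paper's theoretical $(1+\varepsilon)$-OPT statements rely on hyperparameter choices discussed there, not on this sketch.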
Cite
Text
Kyrillidis. "Simple and Practical Algorithms for 𝓁p-Norm Low-Rank Approximation." Conference on Uncertainty in Artificial Intelligence, 2018.
Markdown
[Kyrillidis. "Simple and Practical Algorithms for 𝓁p-Norm Low-Rank Approximation." Conference on Uncertainty in Artificial Intelligence, 2018.](https://mlanthology.org/uai/2018/kyrillidis2018uai-simple/)
BibTeX
@inproceedings{kyrillidis2018uai-simple,
title = {{Simple and Practical Algorithms for 𝓁p-Norm Low-Rank Approximation}},
author = {Kyrillidis, Anastasios},
booktitle = {Conference on Uncertainty in Artificial Intelligence},
year = {2018},
pages = {414-424},
url = {https://mlanthology.org/uai/2018/kyrillidis2018uai-simple/}
}