A Note on the Sample Complexity of the Er-SpUD Algorithm by Spielman, Wang and Wright for Exact Recovery of Sparsely Used Dictionaries

Abstract

We consider the problem of recovering, up to a scaling and permutation of the columns of $A$ and rows of $X$, an invertible $n \times n$ matrix $A$ and a sparse $n \times p$ random matrix $X$ from the observation $Y = AX$. Using only elementary tools from the theory of empirical processes, we show that a version of the Er-SpUD algorithm by Spielman, Wang and Wright with high probability recovers $A$ and $X$ exactly, provided that $p \ge Cn\log n$, which is optimal up to the constant $C$.
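The observation model in the abstract can be sketched as follows. This is a minimal illustration, not the paper's algorithm: the dimensions, the sparsity level `theta`, and the Bernoulli-Gaussian model for the entries of $X$ are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions and sparsity level (assumptions, not from the paper).
n, p, theta = 8, 400, 0.2

# Invertible dictionary A: a random Gaussian matrix is invertible
# with probability one.
A = rng.standard_normal((n, n))

# Sparse coefficient matrix X: each entry is nonzero with probability theta,
# drawn as a standard Gaussian when nonzero (a Bernoulli-Gaussian model).
support = rng.random((n, p)) < theta
X = support * rng.standard_normal((n, p))

# Only Y is observed; the recovery task is to reconstruct A and X from it.
Y = A @ X

# The factorization is identifiable only up to scaling and permutation:
# for any permutation matrix P and invertible diagonal matrix D,
# (A @ P @ D) and (np.linalg.inv(D) @ P.T @ X) produce the same Y.
```

The sample-complexity result states that $p \ge Cn\log n$ such observed columns suffice for exact recovery with high probability.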

Cite

Text

Adamczak. "A Note on the Sample Complexity of the Er-SpUD Algorithm by Spielman, Wang and Wright for Exact Recovery of Sparsely Used Dictionaries." Journal of Machine Learning Research, 2016.

Markdown

[Adamczak. "A Note on the Sample Complexity of the Er-SpUD Algorithm by Spielman, Wang and Wright for Exact Recovery of Sparsely Used Dictionaries." Journal of Machine Learning Research, 2016.](https://mlanthology.org/jmlr/2016/adamczak2016jmlr-note/)

BibTeX

@article{adamczak2016jmlr-note,
  title     = {{A Note on the Sample Complexity of the Er-SpUD Algorithm by Spielman, Wang and Wright for Exact Recovery of Sparsely Used Dictionaries}},
  author    = {Adamczak, Radoslaw},
  journal   = {Journal of Machine Learning Research},
  year      = {2016},
  pages     = {1-18},
  volume    = {17},
  url       = {https://mlanthology.org/jmlr/2016/adamczak2016jmlr-note/}
}