Optimal Sparse Linear Encoders and Sparse PCA

Abstract

Principal components analysis (PCA) is the optimal linear encoder of data. Sparse linear encoders (e.g., sparse PCA) produce more interpretable features that can promote better generalization. (i) Given a level of sparsity, what is the best approximation to PCA? (ii) Are there efficient algorithms which can achieve this optimal combinatorial tradeoff? We answer both questions by providing the first polynomial-time algorithms to construct *optimal* sparse linear auto-encoders; additionally, we demonstrate the performance of our algorithms on real data.
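To make the tradeoff in question (i) concrete, here is a minimal sketch (not the paper's algorithm) that compares the reconstruction error of the dense rank-k PCA encoder against a sparse linear encoder. It uses scikit-learn's `SparsePCA` as a stand-in sparse encoder; the dataset and parameter choices (`k`, `alpha`) are illustrative assumptions.

```python
# Sketch: sparsity vs. reconstruction-error tradeoff for linear encoders.
# SparsePCA is a stand-in sparse encoder, not the algorithm from the paper.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA, SparsePCA

X = load_digits().data
X = X - X.mean(axis=0)          # center the data, as PCA assumes

k = 5                           # encoding dimension (number of components)

# Dense PCA: the optimal rank-k linear encoder/decoder.
pca = PCA(n_components=k).fit(X)
X_pca = pca.inverse_transform(pca.transform(X))
err_pca = np.linalg.norm(X - X_pca, "fro") ** 2

# Sparse encoder: loadings with many zeros (more interpretable features).
spca = SparsePCA(n_components=k, alpha=1.0, random_state=0).fit(X)
Z = spca.transform(X)                        # codes for each data point
# Decode with the least-squares optimal linear decoder for these codes.
D, *_ = np.linalg.lstsq(Z, X, rcond=None)
err_spca = np.linalg.norm(X - Z @ D, "fro") ** 2

nnz = np.count_nonzero(spca.components_)
print(f"PCA reconstruction error:    {err_pca:.1f}")
print(f"Sparse reconstruction error: {err_spca:.1f}")
print(f"Nonzero loadings: {nnz}/{spca.components_.size}")
```

As expected, the sparse encoder's reconstruction error is at least that of PCA; the paper's contribution is characterizing and efficiently achieving the best possible error at a given sparsity level.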

Cite

Text

Magdon-Ismail and Boutsidis. "Optimal Sparse Linear Encoders and Sparse PCA." Neural Information Processing Systems, 2016.

Markdown

[Magdon-Ismail and Boutsidis. "Optimal Sparse Linear Encoders and Sparse PCA." Neural Information Processing Systems, 2016.](https://mlanthology.org/neurips/2016/magdonismail2016neurips-optimal/)

BibTeX

@inproceedings{magdonismail2016neurips-optimal,
  title     = {{Optimal Sparse Linear Encoders and Sparse PCA}},
  author    = {Magdon-Ismail, Malik and Boutsidis, Christos},
  booktitle = {Neural Information Processing Systems},
  year      = {2016},
  pages     = {298--306},
  url       = {https://mlanthology.org/neurips/2016/magdonismail2016neurips-optimal/}
}