Large-Scale Sparse Principal Component Analysis with Application to Text Data
Abstract
Sparse PCA provides a linear combination of a small number of features that maximizes variance across data. Although Sparse PCA has apparent advantages compared to PCA, such as better interpretability, it is generally thought to be computationally much more expensive. In this paper, we demonstrate the surprising fact that sparse PCA can be easier than PCA in practice, and that it can be reliably applied to very large data sets. This comes from a rigorous feature elimination pre-processing result, coupled with the favorable fact that features in real-life data typically have exponentially decreasing variances, which allows many features to be eliminated. We introduce a fast block coordinate ascent algorithm with much better computational complexity than existing first-order methods. We provide experimental results obtained on text corpora involving millions of documents and hundreds of thousands of features. These results illustrate how Sparse PCA can help organize a large corpus of text data in a user-interpretable way, providing an attractive alternative approach to topic models.
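To make the abstract's two ideas concrete, here is a minimal NumPy sketch (not the authors' code): first screen out low-variance features, paraphrasing the paper's elimination result as "a feature whose variance does not exceed the sparsity penalty cannot enter the solution", then solve sparse PCA on the much smaller surviving block. The solver below is a crude soft-thresholded power iteration used only as a stand-in for the paper's block coordinate ascent algorithm, and the thresholds and data are illustrative assumptions.

```python
# Minimal sketch, assuming a variance-based screening rule tied to the
# sparsity penalty `lam`; the solver is an illustrative stand-in, not the
# paper's block coordinate ascent method.
import numpy as np


def screen_features(X, lam):
    """Keep only features whose centered column variance exceeds lam."""
    Xc = X - X.mean(axis=0)                          # center each feature
    variances = (Xc ** 2).sum(axis=0) / X.shape[0]
    keep = np.flatnonzero(variances > lam)           # low-variance columns are dropped
    return keep, Xc[:, keep]


def sparse_pc_power(Xc, tau=0.02, n_iter=200, seed=0):
    """Soft-thresholded power iteration: a simple illustrative sparse PCA solver."""
    rng = np.random.default_rng(seed)
    n, d = Xc.shape
    S = Xc.T @ Xc / n                                # covariance of surviving features
    z = rng.standard_normal(d)
    z /= np.linalg.norm(z)
    for _ in range(n_iter):
        z = S @ z
        z /= np.linalg.norm(z)
        z = np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)  # push small loadings to zero
        nrm = np.linalg.norm(z)
        if nrm == 0.0:
            break
        z /= nrm
    return z


# Toy usage: columns with rapidly decaying scales mimic the "decreasing
# variances" the abstract points to, so most features are eliminated up front.
rng = np.random.default_rng(0)
X = rng.random((500, 2000)) * np.linspace(1.0, 0.01, 2000)
lam = 0.01
keep, Xc = screen_features(X, lam)
z = sparse_pc_power(Xc)
print(f"kept {keep.size} of {X.shape[1]} features; "
      f"nonzero loadings: {np.count_nonzero(z)}")
```

On data with rapidly decaying feature variances, as in the toy example above, the screening step removes most columns before the solver ever runs, which is the mechanism the abstract credits for making sparse PCA practical at large scale.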
Cite
Text
Zhang and Ghaoui. "Large-Scale Sparse Principal Component Analysis with Application to Text Data." Neural Information Processing Systems, 2011.
Markdown
[Zhang and Ghaoui. "Large-Scale Sparse Principal Component Analysis with Application to Text Data." Neural Information Processing Systems, 2011.](https://mlanthology.org/neurips/2011/zhang2011neurips-largescale/)
BibTeX
@inproceedings{zhang2011neurips-largescale,
title = {{Large-Scale Sparse Principal Component Analysis with Application to Text Data}},
author = {Zhang, Youwei and Ghaoui, Laurent E.},
booktitle = {Neural Information Processing Systems},
year = {2011},
pages = {532-539},
url = {https://mlanthology.org/neurips/2011/zhang2011neurips-largescale/}
}