Sparse Higher-Order Principal Components Analysis

Abstract

Traditional tensor decompositions such as the CANDECOMP / PARAFAC (CP) and Tucker decompositions yield higher-order principal components that have been used to understand tensor data in areas such as neuroimaging, microscopy, chemometrics, and remote sensing. Sparsity in high-dimensional matrix factorizations and principal components has been well studied and exhibits many benefits; less attention has been given to sparsity in tensor decompositions. We propose two novel tensor decompositions that incorporate sparsity: the Sparse Higher-Order SVD and the Sparse CP Decomposition. The latter solves an ℓ1-norm penalized relaxation of the single-factor CP optimization problem, thereby automatically selecting relevant features for each tensor factor. Through experiments and a scientific data analysis example, we demonstrate the utility of our methods for dimension reduction, feature selection, signal recovery, and exploratory data analysis of high-dimensional tensors.
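To make the single-factor problem described in the abstract concrete, here is a minimal NumPy sketch of a sparse rank-1 CP fit for a 3-way tensor, using alternating updates with elementwise soft-thresholding followed by renormalization. The function name sparse_rank1_cp, the per-mode penalties lam, and the stopping rule are illustrative assumptions, not the exact algorithm or notation from the paper.

import numpy as np

def soft_threshold(x, lam):
    # Elementwise soft-thresholding: sign(x) * max(|x| - lam, 0)
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def sparse_rank1_cp(X, lam=(0.1, 0.1, 0.1), n_iter=100, tol=1e-6, seed=0):
    # Illustrative sparse rank-1 CP factor for a 3-way tensor X of shape (n, p, q).
    # Returns a scale d and unit-norm sparse factors u, v, w so that X is
    # approximated by d * (u outer v outer w). lam gives hypothetical l1
    # penalties for the three modes.
    rng = np.random.default_rng(seed)
    n, p, q = X.shape
    u = rng.standard_normal(n); u /= np.linalg.norm(u)
    v = rng.standard_normal(p); v /= np.linalg.norm(v)
    w = rng.standard_normal(q); w /= np.linalg.norm(w)

    for _ in range(n_iter):
        u_old = u.copy()
        # Contract X against the other two factors, soft-threshold, renormalize.
        cu = soft_threshold(np.einsum('ijk,j,k->i', X, v, w), lam[0])
        u = cu / np.linalg.norm(cu) if np.linalg.norm(cu) > 0 else cu
        cv = soft_threshold(np.einsum('ijk,i,k->j', X, u, w), lam[1])
        v = cv / np.linalg.norm(cv) if np.linalg.norm(cv) > 0 else cv
        cw = soft_threshold(np.einsum('ijk,i,j->k', X, u, v), lam[2])
        w = cw / np.linalg.norm(cw) if np.linalg.norm(cw) > 0 else cw
        if np.linalg.norm(u - u_old) < tol:
            break

    d = np.einsum('ijk,i,j,k->', X, u, v, w)  # scale of the rank-1 component
    return d, u, v, w

Zero entries in u, v, and w indicate features dropped from the corresponding mode, which is the feature-selection behavior the abstract attributes to the ℓ1 penalty.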

Cite

Text

Allen. "Sparse Higher-Order Principal Components Analysis." Proceedings of the Fifteenth International Conference on Artificial Intelligence and Statistics, 2012.

Markdown

[Allen. "Sparse Higher-Order Principal Components Analysis." Proceedings of the Fifteenth International Conference on Artificial Intelligence and Statistics, 2012.](https://mlanthology.org/aistats/2012/allen2012aistats-sparse/)

BibTeX

@inproceedings{allen2012aistats-sparse,
  title     = {{Sparse Higher-Order Principal Components Analysis}},
  author    = {Allen, Genevera},
  booktitle = {Proceedings of the Fifteenth International Conference on Artificial Intelligence and Statistics},
  year      = {2012},
  pages     = {27--36},
  volume    = {22},
  url       = {https://mlanthology.org/aistats/2012/allen2012aistats-sparse/}
}