Sparse Submodular Probabilistic PCA

Abstract

We propose a novel approach to sparse probabilistic principal component analysis that combines a low-rank representation of the latent factors and loadings with a sparse variational inference approach for estimating distributions of latent variables subject to sparse support constraints. Inference and parameter estimation for the resulting model are achieved via expectation maximization, with a novel variational inference method for the E-step that induces sparsity. We show that this inference problem reduces to discrete optimal support selection. The discrete optimization problem is submodular; hence, greedy selection is guaranteed to achieve a (1 - 1/e) fraction of the optimal value. Empirical studies indicate the effectiveness of the proposed approach in recovering a parsimonious decomposition compared to established baseline methods. We also evaluate our method against state-of-the-art methods on high-dimensional fMRI data, and show that it performs as well as or better than other methods.
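The abstract's key algorithmic point is that the sparse E-step reduces to greedy maximization of a monotone submodular set function over candidate supports, which carries the classical (1 - 1/e) guarantee of Nemhauser et al. The sketch below is a minimal, generic illustration of that greedy scheme, not the paper's actual objective: the `info_gain` function (a log-determinant information gain over a toy covariance `C`) and the size budget `k` are assumptions chosen only to show the mechanics of greedy submodular support selection.

```python
import numpy as np

def greedy_support_selection(marginal_gain, ground_set, k):
    """Greedy maximization of a monotone submodular set function.

    For such functions, the greedy solution attains at least a (1 - 1/e)
    fraction of the optimal value (Nemhauser et al., 1978).
    """
    selected = []
    remaining = set(ground_set)
    for _ in range(k):
        best, best_gain = None, -np.inf
        for j in remaining:
            gain = marginal_gain(selected, j)
            if gain > best_gain:
                best, best_gain = j, gain
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy submodular objective: Gaussian information gain
# f(S) = log det(I + C[S, S]) for a PSD matrix C.
# The paper's E-step objective differs in its details.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 10))
C = A @ A.T / 10.0  # positive semidefinite toy covariance

def info_gain(selected, j):
    S = selected + [j]
    new = np.linalg.slogdet(np.eye(len(S)) + C[np.ix_(S, S)])[1]
    old = np.linalg.slogdet(np.eye(len(selected)) + C[np.ix_(selected, selected)])[1]
    return new - old

support = greedy_support_selection(info_gain, range(C.shape[0]), k=5)
print(support)
```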

Cite

Text

Khanna et al. "Sparse Submodular Probabilistic PCA." International Conference on Artificial Intelligence and Statistics, 2015.

Markdown

[Khanna et al. "Sparse Submodular Probabilistic PCA." International Conference on Artificial Intelligence and Statistics, 2015.](https://mlanthology.org/aistats/2015/khanna2015aistats-sparse/)

BibTeX

@inproceedings{khanna2015aistats-sparse,
  title     = {{Sparse Submodular Probabilistic PCA}},
  author    = {Khanna, Rajiv and Ghosh, Joydeep and Poldrack, Russell A. and Koyejo, Oluwasanmi},
  booktitle = {International Conference on Artificial Intelligence and Statistics},
  year      = {2015},
  url       = {https://mlanthology.org/aistats/2015/khanna2015aistats-sparse/}
}