Sparse Additive Matrix Factorization for Robust PCA and Its Generalization

Abstract

Principal component analysis (PCA) can be regarded as approximating a data matrix with a low-rank one by imposing sparsity on its singular values, and its robust variant further captures sparse noise. In this paper, we extend such sparse matrix learning methods, and propose a novel unified framework called sparse additive matrix factorization (SAMF). SAMF systematically induces various types of sparsity by the so-called model-induced regularization in the Bayesian framework. We propose an iterative algorithm called the mean update (MU) for the variational Bayesian approximation to SAMF, which gives the globally optimal solution for a large subset of parameters in each step. We demonstrate the usefulness of our method on artificial data and on foreground/background video separation.
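As a rough illustration of the low-rank-plus-sparse additive decomposition that robust PCA performs (and that SAMF generalizes), the sketch below solves a generic principal component pursuit problem with ADMM in NumPy. This is not the paper's SAMF model or its mean update (MU) algorithm; the function names, the λ = 1/√max(m, n) heuristic, the μ initialization, and the toy data are illustrative assumptions.

```python
import numpy as np

def shrink(X, tau):
    """Elementwise soft-thresholding (proximal operator of the L1 norm)."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svd_threshold(X, tau):
    """Singular value thresholding (proximal operator of the nuclear norm)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(shrink(s, tau)) @ Vt

def robust_pca(Y, lam=None, mu=None, tol=1e-7, max_iter=500):
    """Split Y into a low-rank part L and a sparse part S (generic PCP via ADMM)."""
    m, n = Y.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))   # standard PCP weight
    mu = mu if mu is not None else 0.25 * m * n / (np.abs(Y).sum() + 1e-12)
    L, S, Z = (np.zeros_like(Y) for _ in range(3))
    for _ in range(max_iter):
        L = svd_threshold(Y - S + Z / mu, 1.0 / mu)   # low-rank update
        S = shrink(Y - L + Z / mu, lam / mu)          # sparse-noise update
        residual = Y - L - S
        Z += mu * residual                            # dual (multiplier) update
        if np.linalg.norm(residual) <= tol * np.linalg.norm(Y):
            break
    return L, S

# Toy example: a rank-5 "background" corrupted by 5% large sparse outliers.
rng = np.random.default_rng(0)
low_rank = rng.standard_normal((100, 5)) @ rng.standard_normal((5, 80))
sparse = np.zeros((100, 80))
mask = rng.random((100, 80)) < 0.05
sparse[mask] = 10.0 * rng.standard_normal(mask.sum())
L, S = robust_pca(low_rank + sparse)
print(np.linalg.matrix_rank(L, tol=1e-3), np.mean(np.abs(S) > 1e-3))
```

In a foreground/background video application, each column of Y would hold one vectorized frame, so L recovers the static background and S the moving foreground.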

Cite

Text

Nakajima et al. "Sparse Additive Matrix Factorization for Robust PCA and Its Generalization." Proceedings of the Fourth Asian Conference on Machine Learning, 2012.

Markdown

[Nakajima et al. "Sparse Additive Matrix Factorization for Robust PCA and Its Generalization." Proceedings of the Fourth Asian Conference on Machine Learning, 2012.](https://mlanthology.org/acml/2012/nakajima2012acml-sparse/)

BibTeX

@inproceedings{nakajima2012acml-sparse,
  title     = {{Sparse Additive Matrix Factorization for Robust PCA and Its Generalization}},
  author    = {Nakajima, Shinichi and Sugiyama, Masashi and Babacan, S. Derin},
  booktitle = {Proceedings of the Fourth Asian Conference on Machine Learning},
  year      = {2012},
  pages     = {301--316},
  volume    = {25},
  url       = {https://mlanthology.org/acml/2012/nakajima2012acml-sparse/}
}