Block Coordinate Descent for Sparse NMF

Abstract

Nonnegative matrix factorization (NMF) has become a ubiquitous tool for data analysis. An important variant is the sparse NMF problem, which arises when we explicitly require the learnt features to be sparse. A natural measure of sparsity is the L$_0$ norm; however, its optimization is NP-hard. Mixed norms, such as the L$_1$/L$_2$ measure, have been shown to model sparsity robustly, based on intuitive attributes that such measures need to satisfy; this is in contrast to computationally cheaper alternatives such as the plain L$_1$ norm. However, present algorithms designed for optimizing the mixed L$_1$/L$_2$ norm are slow, and other formulations for sparse NMF have been proposed, such as those based on the L$_1$ and L$_0$ norms. Our proposed algorithm solves the mixed-norm sparsity constraints without sacrificing computation time. We present experimental evidence on real-world datasets showing that our new algorithm runs an order of magnitude faster than the current state-of-the-art solvers optimizing the mixed norm and is suitable for large-scale datasets.
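
For readers unfamiliar with the L$_1$/L$_2$ measure mentioned in the abstract, the sketch below illustrates a Hoyer-style sparseness score (0 for a uniform vector, 1 for a vector with a single nonzero entry) alongside a standard multiplicative-update NMF baseline. This is only an illustrative sketch: it is not the paper's block coordinate descent algorithm, and the function names and parameter choices are our own assumptions.

```python
import numpy as np

def hoyer_sparseness(x, eps=1e-12):
    """L1/L2 mixed-norm sparseness of a vector:
    0 for a uniform vector, 1 for a vector with a single nonzero entry."""
    n = x.size
    l1 = np.abs(x).sum()
    l2 = np.linalg.norm(x) + eps
    return (np.sqrt(n) - l1 / l2) / (np.sqrt(n) - 1)

def nmf_multiplicative(X, r, n_iter=200, eps=1e-9, seed=0):
    """Plain (unconstrained) NMF via multiplicative updates, X ~= W @ H,
    shown only to make the factorization being sparsified concrete."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, r)) + eps
    H = rng.random((r, n)) + eps
    for _ in range(n_iter):
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        W *= (X @ H.T) / (W @ H @ H.T + eps)
    return W, H

if __name__ == "__main__":
    X = np.abs(np.random.default_rng(1).random((50, 40)))
    W, H = nmf_multiplicative(X, r=5)
    print("reconstruction error:", np.linalg.norm(X - W @ H))
    print("sparseness of first learnt feature:", hoyer_sparseness(W[:, 0]))
```

A sparse NMF formulation of the kind discussed in the paper would additionally constrain (or penalize) the sparseness of the columns of W and/or rows of H; the score above is the quantity such constraints act on.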

Cite

Text

Potluru et al. "Block Coordinate Descent for Sparse NMF." International Conference on Learning Representations, 2013.

Markdown

[Potluru et al. "Block Coordinate Descent for Sparse NMF." International Conference on Learning Representations, 2013.](https://mlanthology.org/iclr/2013/potluru2013iclr-block/)

BibTeX

@inproceedings{potluru2013iclr-block,
  title     = {{Block Coordinate Descent for Sparse NMF}},
  author    = {Potluru, Vamsi K. and Plis, Sergey M. and Le Roux, Jonathan and Pearlmutter, Barak A. and Calhoun, Vince D. and Hayes, Thomas P.},
  booktitle = {International Conference on Learning Representations},
  year      = {2013},
  url       = {https://mlanthology.org/iclr/2013/potluru2013iclr-block/}
}