Algorithms for Non-Negative Matrix Factorization
Abstract
Non-negative matrix factorization (NMF) has previously been shown to be a useful decomposition for multivariate data. Two different multiplicative algorithms for NMF are analyzed. They differ only slightly in the multiplicative factor used in the update rules. One algorithm can be shown to minimize the conventional least squares error while the other minimizes the generalized Kullback-Leibler divergence. The monotonic convergence of both algorithms can be proven using an auxiliary function analogous to that used for proving convergence of the Expectation-Maximization algorithm. The algorithms can also be interpreted as diagonally rescaled gradient descent, where the rescaling factor is optimally chosen to ensure convergence.
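As a minimal sketch of the multiplicative update rules the abstract refers to, here is the least-squares variant, which alternates updates H ← H ⊙ (WᵀV)/(WᵀWH) and W ← W ⊙ (VHᵀ)/(WHHᵀ); under these updates the error ‖V − WH‖² is non-increasing. The function and parameter names (`nmf`, `n_components`, `n_iter`, `eps`) are illustrative, not from the paper, and the small `eps` guard against division by zero is a practical addition; the KL-divergence variant uses a different multiplicative factor and is not shown.

```python
import numpy as np

def nmf(V, n_components, n_iter=1000, eps=1e-10, seed=0):
    """Factor a non-negative matrix V (m x n) as V ~ W H,
    with W (m x r) and H (r x n) non-negative, using the
    least-squares multiplicative update rules."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, n_components))
    H = rng.random((n_components, n))
    for _ in range(n_iter):
        # H <- H * (W^T V) / (W^T W H); eps avoids division by zero
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        # W <- W * (V H^T) / (W H H^T)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H
```

Because each factor is multiplied by a non-negative ratio, non-negativity of W and H is preserved automatically at every step, with no projection needed.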
Lee and Seung. "Algorithms for Non-Negative Matrix Factorization." Neural Information Processing Systems, 2000.
@inproceedings{lee2000neurips-algorithms,
title = {{Algorithms for Non-Negative Matrix Factorization}},
author = {Lee, Daniel D. and Seung, H. Sebastian},
booktitle = {Neural Information Processing Systems},
year = {2000},
pages = {556-562},
url = {https://mlanthology.org/neurips/2000/lee2000neurips-algorithms/}
}