DID: Distributed Incremental Block Coordinate Descent for Nonnegative Matrix Factorization

Abstract

Nonnegative matrix factorization (NMF) has attracted much attention over the last decade as a dimension reduction method in many applications. Due to the explosion in data size, samples are increasingly collected and stored in a distributed manner across local computational nodes, so there is a growing need for algorithms designed for distributed-memory architectures. We propose a novel distributed algorithm, called distributed incremental block coordinate descent (DID), to solve this problem. By adapting the block coordinate descent framework, DID obtains closed-form update rules. Moreover, DID performs its updates incrementally, based on the most recently updated residual matrix, so that only one communication step per iteration is required. The correctness, efficiency, and scalability of the proposed algorithm are verified in a series of numerical experiments.
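
The two ingredients the abstract highlights, closed-form block coordinate descent updates and an incrementally maintained residual matrix, can be illustrated with a minimal single-machine HALS-style sketch. This is only an illustrative assumption, not the authors' DID algorithm: it omits the distributed partitioning of the data and the one-communication-per-iteration scheme, and the function name `hals_nmf` and its parameters are hypothetical.

```python
import numpy as np

def hals_nmf(X, rank, n_iter=200, eps=1e-10, seed=0):
    """Rank-one residual (HALS-style) block coordinate descent for X ~= W @ H.

    Single-machine sketch of the closed-form BCD updates that DID builds on;
    the distributed, incremental communication scheme of DID is not shown.
    """
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, rank))
    H = rng.random((rank, n))
    R = X - W @ H                                  # residual, maintained incrementally
    for _ in range(n_iter):
        for k in range(rank):
            Rk = R + np.outer(W[:, k], H[k, :])    # residual with component k removed
            # closed-form nonnegative update for column w_k
            W[:, k] = np.maximum(0.0, Rk @ H[k, :]) / (H[k, :] @ H[k, :] + eps)
            # closed-form nonnegative update for row h_k given the new w_k
            H[k, :] = np.maximum(0.0, Rk.T @ W[:, k]) / (W[:, k] @ W[:, k] + eps)
            R = Rk - np.outer(W[:, k], H[k, :])    # restore the full residual
    return W, H
```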

Cite

Text

Gao and Chu. "DID: Distributed Incremental Block Coordinate Descent for Nonnegative Matrix Factorization." AAAI Conference on Artificial Intelligence, 2018. doi:10.1609/AAAI.V32I1.11736

Markdown

[Gao and Chu. "DID: Distributed Incremental Block Coordinate Descent for Nonnegative Matrix Factorization." AAAI Conference on Artificial Intelligence, 2018.](https://mlanthology.org/aaai/2018/gao2018aaai-distributed/) doi:10.1609/AAAI.V32I1.11736

BibTeX

@inproceedings{gao2018aaai-distributed,
  title     = {{DID: Distributed Incremental Block Coordinate Descent for Nonnegative Matrix Factorization}},
  author    = {Gao, Tianxiang and Chu, Chris},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2018},
  pages     = {2991--2998},
  doi       = {10.1609/AAAI.V32I1.11736},
  url       = {https://mlanthology.org/aaai/2018/gao2018aaai-distributed/}
}