Decentralized Dictionary Learning over Time-Varying Digraphs
Abstract
This paper studies Dictionary Learning problems in which the learning task is distributed over a multi-agent network, modeled as a time-varying directed graph. This formulation is relevant, for instance, in Big Data scenarios where massive amounts of data are collected/stored in different locations (e.g., sensors, clouds), and aggregating and/or processing all data in a fusion center may be inefficient or infeasible due to resource limitations, communication overhead, or privacy issues. We develop a unified decentralized algorithmic framework for this class of nonconvex problems, which is proved to converge to stationary solutions at a sublinear rate. The new method hinges on Successive Convex Approximation techniques, coupled with a decentralized tracking mechanism that locally estimates the gradient of the smooth part of the sum-utility. To the best of our knowledge, this is the first provably convergent decentralized algorithm for Dictionary Learning and, more generally, bi-convex problems over (time-varying) (di)graphs.
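To make the kind of scheme the abstract describes concrete, here is a minimal sketch of the generic "local sparse coding + dictionary step on a tracked gradient + consensus averaging" pattern for decentralized dictionary learning. It is not the authors' exact algorithm: for simplicity it assumes doubly stochastic mixing matrices on an undirected time-varying topology (the paper handles general digraphs), it replaces the SCA surrogate by a plain projected gradient step on the dictionary, and all problem sizes, step sizes, and helper names (`soft_threshold`, `mixing_matrix`, `local_grad`) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (not from the paper): agents, signal dim, atoms, samples/agent
I, d, K, n_i = 5, 20, 10, 30
lam, alpha, T = 0.1, 0.05, 200        # l1 weight, dictionary step size, iterations

X = [rng.standard_normal((d, n_i)) for _ in range(I)]   # local data blocks

def soft_threshold(Z, tau):
    """Proximal operator of tau * ||.||_1 (elementwise soft thresholding)."""
    return np.sign(Z) * np.maximum(np.abs(Z) - tau, 0.0)

def mixing_matrix(t):
    """Time-varying doubly stochastic weights: alternate between two ring graphs."""
    W = np.eye(I) * 0.5
    shift = 1 if t % 2 == 0 else 2
    for i in range(I):
        W[i, (i + shift) % I] += 0.25
        W[(i + shift) % I, i] += 0.25
    return W

def local_grad(Di, Si, Xi):
    """Gradient of 0.5 * ||Xi - Di Si||_F^2 with respect to the dictionary."""
    return (Di @ Si - Xi) @ Si.T

# Local variables: dictionary D_i, codes S_i, last local gradient G_i, tracker Y_i
D = [rng.standard_normal((d, K)) for _ in range(I)]
D = [Di / np.linalg.norm(Di, axis=0, keepdims=True) for Di in D]
S = [np.zeros((K, n_i)) for _ in range(I)]
G = [local_grad(D[i], S[i], X[i]) for i in range(I)]
Y = [Gi.copy() for Gi in G]            # tracking variables initialized at local gradients

for t in range(T):
    W = mixing_matrix(t)

    # 1) Sparse coding: a few ISTA steps on the convex local subproblem in S_i.
    for i in range(I):
        L = np.linalg.norm(D[i], 2) ** 2 + 1e-12     # Lipschitz constant of the S-gradient
        for _ in range(5):
            grad_S = D[i].T @ (D[i] @ S[i] - X[i])
            S[i] = soft_threshold(S[i] - grad_S / L, lam / L)

    # 2) Dictionary step driven by the *tracked* gradient, then column-norm projection.
    D_half = []
    for i in range(I):
        Di = D[i] - alpha * Y[i]
        Di /= np.maximum(np.linalg.norm(Di, axis=0, keepdims=True), 1.0)
        D_half.append(Di)

    # 3) Consensus on dictionaries and gradient-tracking update.
    D_new = [sum(W[i, j] * D_half[j] for j in range(I)) for i in range(I)]
    G_new = [local_grad(D_new[i], S[i], X[i]) for i in range(I)]
    Y = [sum(W[i, j] * Y[j] for j in range(I)) + G_new[i] - G[i] for i in range(I)]
    D, G = D_new, G_new

obj = sum(0.5 * np.linalg.norm(X[i] - D[i] @ S[i])**2 + lam * np.abs(S[i]).sum()
          for i in range(I))
print(f"final decentralized objective: {obj:.3f}")
```

The tracking recursion in step 3, Y_i ← Σ_j w_ij Y_j + ∇f_i(D_i^new) − ∇f_i(D_i^old), is the mechanism the abstract refers to: each agent maintains a running estimate of the gradient of the network-wide smooth loss using only exchanges with its current neighbors.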
Cite

Text

Daneshmand et al. "Decentralized Dictionary Learning over Time-Varying Digraphs." Journal of Machine Learning Research, 2019.

Markdown

[Daneshmand et al. "Decentralized Dictionary Learning over Time-Varying Digraphs." Journal of Machine Learning Research, 2019.](https://mlanthology.org/jmlr/2019/daneshmand2019jmlr-decentralized/)

BibTeX
@article{daneshmand2019jmlr-decentralized,
title = {{Decentralized Dictionary Learning over Time-Varying Digraphs}},
author = {Daneshmand, Amir and Sun, Ying and Scutari, Gesualdo and Facchinei, Francisco and Sadler, Brian M.},
journal = {Journal of Machine Learning Research},
year = {2019},
pages = {1-62},
volume = {20},
url = {https://mlanthology.org/jmlr/2019/daneshmand2019jmlr-decentralized/}
}