Decentralized Matrix Sensing: Statistical Guarantees and Fast Convergence
Abstract
We explore the matrix sensing problem from near-isotropic linear measurements, distributed across a network of agents modeled as an undirected graph, with no centralized node. We provide the first study of statistical and computational/communication guarantees for a decentralized gradient algorithm that solves the (nonconvex) Burer-Monteiro type decomposition associated with the low-rank matrix estimation. With small random initialization, the algorithm displays an approximate two-phase convergence: (i) a spectral phase that aligns the iterates' column space with the underlying low-rank matrix, mimicking centralized spectral initialization (not directly implementable over networks); and (ii) a local refinement phase that steers the iterates away from certain degenerate saddle points, while ensuring swift convergence to the underlying low-rank matrix. Central to our analysis is a novel "in-network" Restricted Isometry Property which accommodates the decentralized nature of the optimization, revealing an intriguing interplay between sample complexity and network connectivity, topology, and communication complexity.
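As a rough illustration of the setting, the following NumPy sketch runs a DGD-style decentralized gradient method on the Burer-Monteiro factorization from small random initialization: each agent holds its own Gaussian linear measurements of a low-rank matrix, mixes its factor iterate with its neighbors' (here over a 4-node ring), and takes a local gradient step. All problem sizes, the topology, the mixing weights, and the step size are illustrative assumptions, not the paper's settings or its exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative problem sizes (assumptions, not the paper's).
n, r = 20, 2          # matrix dimension and rank
m_agents, s = 4, 60   # number of agents, measurements per agent

# Ground-truth low-rank matrix M* = U* U*^T.
U_star = rng.normal(size=(n, r)) / np.sqrt(n)
M_star = U_star @ U_star.T

# Each agent holds its own Gaussian sensing matrices and exact measurements.
A = [rng.normal(size=(s, n, n)) for _ in range(m_agents)]
y = [np.einsum('kij,ij->k', A_i, M_star) for A_i in A]

# Doubly stochastic mixing matrix for a 4-node ring graph.
W = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])

def local_grad(U, A_i, y_i):
    """Gradient of (1/2s) * sum_k (<A_k, U U^T> - y_k)^2 w.r.t. U."""
    res = np.einsum('kij,ij->k', A_i, U @ U.T) - y_i
    G = np.einsum('k,kij->ij', res, A_i) / len(y_i)
    return (G + G.T) @ U

# Small random initialization, shared by all agents.
eps, eta, T = 1e-2, 0.1, 800
U0 = eps * rng.normal(size=(n, r))
U = [U0.copy() for _ in range(m_agents)]

# DGD iteration: neighbor averaging followed by a local gradient step.
for t in range(T):
    grads = [local_grad(U[i], A[i], y[i]) for i in range(m_agents)]
    U = [sum(W[i, j] * U[j] for j in range(m_agents)) - eta * grads[i]
         for i in range(m_agents)]

# Measure recovery of M* by the network-average factor.
U_bar = sum(U) / m_agents
rel_err = np.linalg.norm(U_bar @ U_bar.T - M_star) / np.linalg.norm(M_star)
print(f"relative recovery error: {rel_err:.4f}")
```

Because the (noiseless) measurements share the common solution set {U : U U^T = M*}, all local gradients vanish there, so even this plain constant-stepsize scheme can drive the recovery error to near zero rather than to a bias floor.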
Cite

Text

Maros and Scutari. "Decentralized Matrix Sensing: Statistical Guarantees and Fast Convergence." Neural Information Processing Systems, 2023.

Markdown

[Maros and Scutari. "Decentralized Matrix Sensing: Statistical Guarantees and Fast Convergence." Neural Information Processing Systems, 2023.](https://mlanthology.org/neurips/2023/maros2023neurips-decentralized/)

BibTeX
@inproceedings{maros2023neurips-decentralized,
title = {{Decentralized Matrix Sensing: Statistical Guarantees and Fast Convergence}},
author = {Maros, Marie and Scutari, Gesualdo},
booktitle = {Neural Information Processing Systems},
year = {2023},
url = {https://mlanthology.org/neurips/2023/maros2023neurips-decentralized/}
}