Communication-Efficient Distributed SVD via Local Power Iterations
Abstract
We study the distributed computation of the truncated singular value decomposition (SVD). We develop an algorithm, called \texttt{LocalPower}, that improves communication efficiency. Specifically, we uniformly partition the dataset among $m$ nodes and alternate between multiple (precisely $p$) local power iterations and one global aggregation. In the aggregation, we propose to weight each local eigenvector matrix with an orthogonal Procrustes transformation (OPT). As a practical surrogate for OPT, sign-fixing, which uses a diagonal matrix with $\pm 1$ entries as weights, has lower computational complexity and better stability in experiments. We theoretically show that, under certain assumptions, \texttt{LocalPower} lowers the required number of communications by a factor of $p$ to reach a constant accuracy. We also show that the strategy of periodically decaying $p$ helps obtain high-precision solutions. We conduct experiments to demonstrate the effectiveness of \texttt{LocalPower}.
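The procedure the abstract describes can be illustrated compactly: row-partition the data across nodes, run $p$ local power iterations per node, then aggregate the local eigenvector matrices after aligning them with either OPT weights or the sign-fixing surrogate. Below is a minimal NumPy sketch of this idea; the function names (`local_power`, `procrustes_align`, `sign_fix`) and implementation choices (thin-QR orthonormalization, aligning all nodes to node 0) are illustrative assumptions, not the authors' reference implementation.

```python
# A minimal sketch of the LocalPower idea from the abstract (assumed details
# are marked below); not the authors' reference code.
import numpy as np

def orth(Z):
    """Orthonormalize columns via thin QR."""
    Q, _ = np.linalg.qr(Z)
    return Q

def procrustes_align(Z_i, Z_ref):
    """Orthogonal Procrustes transformation (OPT): the orthogonal D
    minimizing ||Z_i D - Z_ref||_F, from the SVD of Z_i^T Z_ref."""
    U, _, Vt = np.linalg.svd(Z_i.T @ Z_ref)
    return U @ Vt

def sign_fix(Z_i, Z_ref):
    """Cheaper surrogate: diagonal +/-1 weights matching the signs
    of diag(Z_i^T Z_ref)."""
    d = np.sign(np.diag(Z_i.T @ Z_ref))
    d[d == 0] = 1.0  # break ties toward +1 (assumed convention)
    return np.diag(d)

def local_power(A_blocks, k, p, rounds, align=sign_fix, seed=0):
    """Alternate p local power iterations on each node's Gram matrix
    A_i^T A_i with one weighted global aggregation per round."""
    d = A_blocks[0].shape[1]
    rng = np.random.default_rng(seed)
    Z = orth(rng.standard_normal((d, k)))        # shared initialization
    for _ in range(rounds):                      # one communication per round
        locals_ = []
        for A_i in A_blocks:
            Z_i = Z
            for _ in range(p):                   # p local power iterations
                Z_i = orth(A_i.T @ (A_i @ Z_i))
            locals_.append(Z_i)
        Z_ref = locals_[0]                       # reference node (assumed: node 0)
        Z = orth(sum(Z_i @ align(Z_i, Z_ref) for Z_i in locals_) / len(locals_))
    return Z  # approximate top-k right singular vectors of the stacked data

# Example usage on synthetic data split over m = 4 nodes.
rng = np.random.default_rng(1)
A = rng.standard_normal((1000, 50))
blocks = np.array_split(A, 4)                    # uniform row partition
V_hat = local_power(blocks, k=5, p=4, rounds=10)
```

Passing `align=procrustes_align` gives the OPT-weighted aggregation instead of sign-fixing; the abstract's periodically decaying $p$ strategy would correspond to shrinking `p` across rounds.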
Cite
Text
Li et al. "Communication-Efficient Distributed SVD via Local Power Iterations." International Conference on Machine Learning, 2021.Markdown
[Li et al. "Communication-Efficient Distributed SVD via Local Power Iterations." International Conference on Machine Learning, 2021.](https://mlanthology.org/icml/2021/li2021icml-communicationefficient/)BibTeX
@inproceedings{li2021icml-communicationefficient,
title = {{Communication-Efficient Distributed SVD via Local Power Iterations}},
author = {Li, Xiang and Wang, Shusen and Chen, Kun and Zhang, Zhihua},
booktitle = {International Conference on Machine Learning},
year = {2021},
pages = {6504--6514},
volume = {139},
url = {https://mlanthology.org/icml/2021/li2021icml-communicationefficient/}
}