Unwrapping ADMM: Efficient Distributed Computing via Transpose Reduction
Abstract
Recent approaches to distributed model fitting rely heavily on consensus ADMM, where each node solves small sub-problems using only local data. We propose iterative methods that solve {\em global} sub-problems over an entire distributed dataset. This is possible using transpose reduction strategies that allow a single node to solve least-squares over massive datasets without putting all the data in one place. This results in simple iterative methods that avoid the expensive inner loops required for consensus methods. To demonstrate the efficiency of this approach, we fit linear classifiers and sparse linear models to datasets over 5 TB in size using a distributed implementation with over 7000 cores in far less time than previous approaches.
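The following is a minimal sketch of the transpose-reduction idea described in the abstract, assuming a regularized least-squares problem: each node computes only the small d-by-d Gram matrix and d-vector from its local rows, these are summed across nodes, and a single node solves the resulting global system. The chunked partitioning, the ridge parameter, and the function names are illustrative assumptions, not details from the paper.

```python
import numpy as np

def local_statistics(A_i, b_i):
    """Computed on each node from its local rows only: A_i^T A_i and A_i^T b_i."""
    return A_i.T @ A_i, A_i.T @ b_i

def transpose_reduction_solve(chunks, lam=1e-3):
    """Aggregate per-node Gram matrices and solve one global d x d system.

    `chunks` is a list of (A_i, b_i) pairs standing in for data stored on
    different nodes; `lam` is a small ridge term (an assumption, not from
    the paper) that keeps the system well posed.
    """
    d = chunks[0][0].shape[1]
    gram = np.zeros((d, d))
    rhs = np.zeros(d)
    for A_i, b_i in chunks:          # in practice, a reduce-sum across nodes
        G_i, r_i = local_statistics(A_i, b_i)
        gram += G_i
        rhs += r_i
    return np.linalg.solve(gram + lam * np.eye(d), rhs)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((10_000, 50))
    x_true = rng.standard_normal(50)
    b = A @ x_true + 0.01 * rng.standard_normal(10_000)
    # Split the rows into 8 chunks to mimic 8 nodes holding disjoint data.
    chunks = list(zip(np.array_split(A, 8), np.array_split(b, 8)))
    x_hat = transpose_reduction_solve(chunks)
    print(np.linalg.norm(x_hat - x_true))
```

The point of the reduction is that only O(d^2) numbers leave each node, regardless of how many rows it stores, so the global least-squares sub-problem can be solved without ever collecting the full dataset in one place.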
Cite
Text
Goldstein et al. "Unwrapping ADMM: Efficient Distributed Computing via Transpose Reduction." International Conference on Artificial Intelligence and Statistics, 2016.
Markdown
[Goldstein et al. "Unwrapping ADMM: Efficient Distributed Computing via Transpose Reduction." International Conference on Artificial Intelligence and Statistics, 2016.](https://mlanthology.org/aistats/2016/goldstein2016aistats-unwrapping/)
BibTeX
@inproceedings{goldstein2016aistats-unwrapping,
title = {{Unwrapping ADMM: Efficient Distributed Computing via Transpose Reduction}},
author = {Goldstein, Tom and Taylor, Gavin and Barabin, Kawika and Sayre, Kent},
booktitle = {International Conference on Artificial Intelligence and Statistics},
year = {2016},
pages = {1151--1158},
url = {https://mlanthology.org/aistats/2016/goldstein2016aistats-unwrapping/}
}