Communication-Efficient Distributed Optimization Using an Approximate Newton-Type Method
Abstract
We present a novel Newton-type method for distributed optimization, which is particularly well suited for stochastic optimization and learning problems. For quadratic objectives, the method enjoys a linear rate of convergence which provably improves with the data size, requiring an essentially constant number of iterations under reasonable assumptions. We provide theoretical and empirical evidence of the advantages of our method compared to other approaches, such as one-shot parameter averaging and ADMM.
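As a rough illustration of an approximate Newton-type distributed iteration of the kind the abstract describes, the sketch below runs a simplified variant on a distributed ridge-regression (quadratic) problem: one communication round averages the local gradients, each machine then applies its own local Hessian as a preconditioner, and a second round averages the local solutions. This is a hedged reading of the approach, not the paper's exact algorithm; the function name `dane_quadratic` and the parameters `eta`, `mu`, and `reg` are illustrative assumptions rather than notation from the paper.

```python
import numpy as np

def dane_quadratic(Xs, ys, reg=1e-3, eta=1.0, mu=0.0, iters=10):
    """Illustrative sketch of an approximate Newton-type distributed
    iteration for ridge regression (quadratic local objectives).

    Xs, ys -- lists with one (n_i x d) design matrix and length-n_i target
    vector per machine.  Each machine's local objective is
        phi_i(w) = 1/(2 n_i) ||X_i w - y_i||^2 + (reg/2) ||w||^2,
    and the global objective is their average.
    """
    d = Xs[0].shape[1]
    m = len(Xs)
    w = np.zeros(d)

    # Local Hessians H_i and linear terms b_i (fixed for quadratics).
    Hs = [X.T @ X / len(y) + reg * np.eye(d) for X, y in zip(Xs, ys)]
    bs = [X.T @ y / len(y) for X, y in zip(Xs, ys)]

    for _ in range(iters):
        # Communication round 1: average the local gradients.
        full_grad = sum(H @ w - b for H, b in zip(Hs, bs)) / m

        # Each machine solves its regularized local subproblem.  For a
        # quadratic phi_i this reduces to a step preconditioned by the
        # local Hessian:  v_i = w - eta * (H_i + mu*I)^{-1} full_grad.
        local_solutions = [
            w - eta * np.linalg.solve(H + mu * np.eye(d), full_grad)
            for H in Hs
        ]

        # Communication round 2: average the local solutions.
        w = sum(local_solutions) / m
    return w


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w_true = rng.standard_normal(5)
    Xs, ys = [], []
    for _ in range(4):  # four simulated machines, each with its own data
        X = rng.standard_normal((200, 5))
        Xs.append(X)
        ys.append(X @ w_true + 0.1 * rng.standard_normal(200))
    print(dane_quadratic(Xs, ys, iters=5))
```

In this quadratic setting each local solve has the closed form shown above, so every iteration costs two communication rounds, which is consistent with the abstract's emphasis on communication efficiency.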
Cite
Text
Shamir et al. "Communication-Efficient Distributed Optimization Using an Approximate Newton-Type Method." International Conference on Machine Learning, 2014.
Markdown
[Shamir et al. "Communication-Efficient Distributed Optimization Using an Approximate Newton-Type Method." International Conference on Machine Learning, 2014.](https://mlanthology.org/icml/2014/shamir2014icml-communicationefficient/)
BibTeX
@inproceedings{shamir2014icml-communicationefficient,
title = {{Communication-Efficient Distributed Optimization Using an Approximate Newton-Type Method}},
author = {Shamir, Ohad and Srebro, Nati and Zhang, Tong},
booktitle = {International Conference on Machine Learning},
year = {2014},
pages = {1000-1008},
volume = {32},
url = {https://mlanthology.org/icml/2014/shamir2014icml-communicationefficient/}
}