DiSCO: Distributed Optimization for Self-Concordant Empirical Loss
Abstract
We propose a new distributed algorithm for empirical risk minimization in machine learning. The algorithm is based on an inexact damped Newton method, where the inexact Newton steps are computed by a distributed preconditioned conjugate gradient method. We analyze its iteration complexity and communication efficiency for minimizing self-concordant empirical loss functions, and discuss the results for distributed ridge regression, logistic regression, and binary classification with a smoothed hinge loss. In a standard setting for supervised learning, where the n data points are sampled i.i.d. and the regularization parameter scales as 1/\sqrt{n}, we show that the proposed algorithm is communication efficient: the required number of communication rounds does not increase with the sample size n and grows only slowly with the number of machines.
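The abstract's core step, an inexact damped Newton update whose direction is computed by conjugate gradient, can be sketched on a single machine. The snippet below is a minimal illustration, not the paper's distributed procedure: the logistic-regression objective, the plain (unpreconditioned) CG solver, the tolerance, and all function and variable names are assumptions chosen for the example. DiSCO additionally distributes the Hessian-vector products across machines and preconditions CG with one machine's local Hessian.

# A minimal single-machine sketch, in the spirit of DiSCO's inexact damped
# Newton method: the Newton direction is obtained by conjugate gradient using
# only Hessian-vector products.  The problem setup and CG tolerance are
# illustrative assumptions, not the paper's distributed algorithm.
import numpy as np

def logistic_loss_grad_hessvec(w, X, y, lam):
    """Regularized logistic loss, its gradient, and a Hessian-vector product."""
    z = X @ w
    p = 1.0 / (1.0 + np.exp(-y * z))          # probability of the correct label
    loss = np.mean(np.logaddexp(0.0, -y * z)) + 0.5 * lam * (w @ w)
    grad = -X.T @ (y * (1.0 - p)) / len(y) + lam * w
    d = p * (1.0 - p)                          # per-example curvature weights
    hess_vec = lambda v: X.T @ (d * (X @ v)) / len(y) + lam * v
    return loss, grad, hess_vec

def cg(hess_vec, b, tol=1e-6, max_iter=200):
    """Plain conjugate gradient for H v = b, using only Hessian-vector products."""
    v = np.zeros_like(b)
    r = b.copy()
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Hp = hess_vec(p)
        alpha = rs / (p @ Hp)
        v += alpha * p
        r -= alpha * Hp
        rs_new = r @ r
        if np.sqrt(rs_new) <= tol * np.linalg.norm(b):
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return v

def inexact_damped_newton(X, y, lam=1e-3, n_iter=20):
    """Damped Newton iterations with CG-computed (inexact) directions."""
    w = np.zeros(X.shape[1])
    for k in range(n_iter):
        loss, grad, hess_vec = logistic_loss_grad_hessvec(w, X, y, lam)
        v = cg(hess_vec, grad)                 # inexact Newton direction
        delta = np.sqrt(v @ hess_vec(v))       # approximate Newton decrement
        w = w - v / (1.0 + delta)              # damped Newton step
        print(f"iter {k}: loss = {loss:.6f}, decrement = {delta:.3e}")
    return w

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.standard_normal((500, 20))
    w_true = rng.standard_normal(20)
    y = np.sign(X @ w_true + 0.1 * rng.standard_normal(500))
    inexact_damped_newton(X, y)

The 1/(1+delta) damping is the standard damped Newton step size for self-concordant objectives; controlling the inexactness of the CG solve is what the paper's iteration-complexity and communication analysis is about.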
Cite
Zhang and Xiao. "DiSCO: Distributed Optimization for Self-Concordant Empirical Loss." International Conference on Machine Learning, 2015.
BibTeX
@inproceedings{zhang2015icml-disco,
title = {{DiSCO: Distributed Optimization for Self-Concordant Empirical Loss}},
author = {Zhang, Yuchen and Xiao, Lin},
booktitle = {International Conference on Machine Learning},
year = {2015},
pages = {362--370},
volume = {37},
url = {https://mlanthology.org/icml/2015/zhang2015icml-disco/}
}