Distributed Block-Diagonal Approximation Methods for Regularized Empirical Risk Minimization

Abstract

In recent years, there has been a growing need to train machine learning models on huge volumes of data. Designing efficient distributed optimization algorithms for empirical risk minimization (ERM) has therefore become an active and challenging research topic. In this paper, we propose a flexible framework for distributed ERM training through solving the dual problem, which provides a unified description and comparison of existing methods. Our approach requires only approximate solutions of the sub-problems involved in the optimization process, and is versatile enough to be applied to many large-scale machine learning problems, including classification, regression, and structured prediction. We show that our framework enjoys global linear convergence for a broad class of non-strongly-convex problems, and that, through a refined analysis, some specific choices of the sub-problems achieve much faster convergence than existing approaches. This improved convergence rate is also reflected in the superior empirical performance of our method.
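As a rough illustration of the setting (a minimal sketch in standard notation; the symbols below are assumptions for exposition, not taken verbatim from the paper), consider L2-regularized ERM with training points x_1, ..., x_n, data matrix X = [x_1, ..., x_n], convex losses ξ_i, and their conjugates ξ_i*:

\min_{w \in \mathbb{R}^d} \; \frac{\lambda}{2}\|w\|^2 + \sum_{i=1}^{n} \xi_i\!\left(x_i^\top w\right) \qquad \text{(primal)}

\min_{\alpha \in \mathbb{R}^n} \; \frac{1}{2\lambda}\|X\alpha\|^2 + \sum_{i=1}^{n} \xi_i^{*}(-\alpha_i) \qquad \text{(dual)}

Partitioning the dual variables α across machines and replacing the coupling term \|X\alpha\|^2 with a block-diagonal approximation (keeping only the within-machine blocks of X^\top X) decouples the dual into per-machine sub-problems that each worker can solve approximately between communication rounds; this is the kind of sub-problem flexibility the abstract refers to.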

Cite

Text

Lee and Chang. "Distributed Block-Diagonal Approximation Methods for Regularized Empirical Risk Minimization." Machine Learning, 2020. doi:10.1007/s10994-019-05859-2

Markdown

[Lee and Chang. "Distributed Block-Diagonal Approximation Methods for Regularized Empirical Risk Minimization." Machine Learning, 2020.](https://mlanthology.org/mlj/2020/lee2020mlj-distributed/) doi:10.1007/s10994-019-05859-2

BibTeX

@article{lee2020mlj-distributed,
  title     = {{Distributed Block-Diagonal Approximation Methods for Regularized Empirical Risk Minimization}},
  author    = {Lee, Ching-Pei and Chang, Kai-Wei},
  journal   = {Machine Learning},
  year      = {2020},
  pages     = {813--852},
  doi       = {10.1007/s10994-019-05859-2},
  volume    = {109},
  url       = {https://mlanthology.org/mlj/2020/lee2020mlj-distributed/}
}