DINO: Distributed Newton-Type Optimization Method

Abstract

We present a novel communication-efficient Newton-type algorithm for finite-sum optimization over a distributed computing environment. Our method, named DINO, overcomes both theoretical and practical shortcomings of similar existing methods. Under minimal assumptions, we guarantee global sub-linear convergence of DINO to a first-order stationary point for general non-convex functions and arbitrary data distributions over the network. Furthermore, for functions satisfying the Polyak-Łojasiewicz (PL) inequality, we show that DINO enjoys a linear convergence rate. Our proposed algorithm is practically parameter-free, in that it converges regardless of the selected hyper-parameters, which are easy to tune. Additionally, its sub-problems are simple linear least-squares, for which efficient solvers exist, and numerical simulations demonstrate the efficiency of DINO compared with similar alternatives.
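The abstract notes that DINO's sub-problems are simple linear least-squares. As a hedged illustration only (the synthetic `A` and `b` below are stand-ins, not the gradient- and Hessian-related quantities from the paper), a generic least-squares sub-problem can be solved with an off-the-shelf solver like so:

```python
import numpy as np

# Illustrative sketch: A and b are synthetic stand-ins for a local
# least-squares sub-problem, not DINO's actual sub-problem data.
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 10))   # hypothetical system matrix
b = rng.standard_normal(100)         # hypothetical right-hand side

# Solve min_x ||A x - b||_2 with an efficient dense solver.
x, residuals, rank, sv = np.linalg.lstsq(A, b, rcond=None)

# At the minimizer, the normal equations A^T A x = A^T b hold.
print(np.allclose(A.T @ (A @ x), A.T @ b))
```

In practice, for large sparse systems an iterative solver (e.g. `scipy.sparse.linalg.lsmr`) would typically replace the dense `lstsq` call.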

Cite

Text

Crane and Roosta. "DINO: Distributed Newton-Type Optimization Method." International Conference on Machine Learning, 2020.

Markdown

[Crane and Roosta. "DINO: Distributed Newton-Type Optimization Method." International Conference on Machine Learning, 2020.](https://mlanthology.org/icml/2020/crane2020icml-dino/)

BibTeX

@inproceedings{crane2020icml-dino,
  title     = {{DINO: Distributed Newton-Type Optimization Method}},
  author    = {Crane, Rixon and Roosta, Fred},
  booktitle = {International Conference on Machine Learning},
  year      = {2020},
  pages     = {2174--2184},
  volume    = {119},
  url       = {https://mlanthology.org/icml/2020/crane2020icml-dino/}
}