Second-Order Stochastic Optimization for Machine Learning in Linear Time

Abstract

First-order stochastic methods are the state of the art in large-scale machine learning optimization owing to their efficient per-iteration complexity. Second-order methods, while able to provide faster convergence, have been much less explored because of the high cost of computing the second-order information. In this paper we develop second-order stochastic methods for optimization problems in machine learning that match the per-iteration cost of gradient-based methods and, in certain settings, improve upon the overall running time of popular first-order methods. Furthermore, our algorithm has the desirable property of being implementable in time linear in the sparsity of the input data.
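The paper's algorithm (LiSSA) realizes this by estimating the Newton direction $H^{-1}\nabla f$ with a Neumann-series recursion driven by single-example Hessian-vector products, each computable in time linear in the sparsity of one data point. Below is a minimal illustrative sketch for $\ell_2$-regularized logistic regression, not the paper's exact procedure: the function name `lissa_step`, the parameters `s1`/`s2`/`lam`, and the assumption that the loss is scaled so the Hessian's spectrum lies in $(0, 1]$ are ours for illustration.

```python
import numpy as np

def lissa_step(X, y, w, lam, s1=1, s2=100, rng=None):
    """One illustrative LiSSA-style stochastic Newton step for
    l2-regularized logistic regression (hypothetical names/params).

    Assumes the loss is scaled so the Hessian H has eigenvalues in
    (0, 1]; then the recursion v_j = g + (I - H_j) v_{j-1}, with H_j
    an unbiased single-sample Hessian estimate, approximates H^{-1} g
    after s2 iterations.
    """
    rng = np.random.default_rng() if rng is None else rng
    n, d = X.shape

    # Gradient of the regularized logistic loss at w.
    p = 1.0 / (1.0 + np.exp(-X @ w))
    g = X.T @ (p - y) / n + lam * w

    estimates = []
    for _ in range(s1):
        v = g.copy()
        for _ in range(s2):
            # Single-sample Hessian-vector product: for logistic loss,
            # H_i v = sigma_i (1 - sigma_i) x_i (x_i^T v) + lam v,
            # computable in time linear in nnz(x_i).
            i = rng.integers(n)
            xi = X[i]
            si = 1.0 / (1.0 + np.exp(-xi @ w))
            hv = si * (1.0 - si) * xi * (xi @ v) + lam * v
            v = g + v - hv  # v <- g + (I - H_i) v
        estimates.append(v)

    # Average the s1 independent estimates of H^{-1} g and take
    # the (stochastic) Newton step.
    return w - np.mean(estimates, axis=0)
```

Each inner iteration touches a single row of `X`, so its cost is proportional to the number of nonzeros in that row, matching the per-iteration cost of a stochastic gradient step, which is the linear-time property the abstract highlights.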

Cite

Text

Agarwal et al. "Second-Order Stochastic Optimization for Machine Learning in Linear Time." Journal of Machine Learning Research, 2017.

Markdown

[Agarwal et al. "Second-Order Stochastic Optimization for Machine Learning in Linear Time." Journal of Machine Learning Research, 2017.](https://mlanthology.org/jmlr/2017/agarwal2017jmlr-secondorder/)

BibTeX

@article{agarwal2017jmlr-secondorder,
  title     = {{Second-Order Stochastic Optimization for Machine Learning in Linear Time}},
  author    = {Agarwal, Naman and Bullins, Brian and Hazan, Elad},
  journal   = {Journal of Machine Learning Research},
  year      = {2017},
  pages     = {1--40},
  volume    = {18},
  url       = {https://mlanthology.org/jmlr/2017/agarwal2017jmlr-secondorder/}
}