A Superlinearly-Convergent Proximal Newton-Type Method for the Optimization of Finite Sums
Abstract
We consider the problem of minimizing the strongly convex sum of a finite number of convex functions. Standard incremental/stochastic algorithms for this problem achieve at most a linear convergence rate. We propose a new incremental method with a superlinear convergence rate, the Newton-type incremental method (NIM). The idea of the method is to introduce a model of the objective with the same sum-of-functions structure and to update a single component of the model per iteration. We prove that NIM has a superlinear local convergence rate and a linear global convergence rate. Experiments show that the method is very effective for problems with a large number of functions and a small number of variables.
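The core idea in the abstract, keeping a per-component model of the objective and refreshing only one component per iteration, can be illustrated with a small sketch. The Python code below is a hedged illustration of that aggregation scheme for the smooth, strongly convex case only; it is not the authors' exact NIM algorithm, and the function names, the cyclic component order, and the use of exact per-component Hessians are assumptions made for the example.

```python
# Sketch (not the paper's exact method): each component f_i keeps a
# second-order model built at its own center point z_i, one model is
# refreshed per iteration, and the next iterate minimizes the aggregate.
import numpy as np

def incremental_newton_sketch(grads, hessians, x0, n_iters=100):
    """grads, hessians: lists of callables, one per component f_i."""
    n = len(grads)
    x = x0.copy()
    z = [x0.copy() for _ in range(n)]          # per-component center points
    g = [grads[i](x0) for i in range(n)]       # stored gradients at z_i
    H = [hessians[i](x0) for i in range(n)]    # stored Hessians at z_i
    # Aggregated model: sum_i [ g_i^T (x - z_i) + 0.5 (x - z_i)^T H_i (x - z_i) ];
    # its minimizer solves (sum_i H_i) x = sum_i (H_i z_i - g_i).
    H_sum = sum(H)
    u_sum = sum(H[i] @ z[i] - g[i] for i in range(n))
    for k in range(n_iters):
        # Newton-type step: minimize the aggregated quadratic model.
        x = np.linalg.solve(H_sum, u_sum)
        # Refresh a single component's model at the new point (cyclic order here).
        i = k % n
        H_sum -= H[i]
        u_sum -= H[i] @ z[i] - g[i]
        z[i], g[i], H[i] = x.copy(), grads[i](x), hessians[i](x)
        H_sum += H[i]
        u_sum += H[i] @ z[i] - g[i]
    return x
```

With exact Hessians of a strongly convex sum, the aggregated matrix stays positive definite, so the linear solve is well defined. Handling a composite objective, as the "proximal" in the title suggests, would replace this solve with a proximal Newton-type step on the aggregated model.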
Cite
Text
Rodomanov and Kropotov. "A Superlinearly-Convergent Proximal Newton-Type Method for the Optimization of Finite Sums." International Conference on Machine Learning, 2016.
Markdown
[Rodomanov and Kropotov. "A Superlinearly-Convergent Proximal Newton-Type Method for the Optimization of Finite Sums." International Conference on Machine Learning, 2016.](https://mlanthology.org/icml/2016/rodomanov2016icml-superlinearlyconvergent/)
BibTeX
@inproceedings{rodomanov2016icml-superlinearlyconvergent,
title = {{A Superlinearly-Convergent Proximal Newton-Type Method for the Optimization of Finite Sums}},
author = {Rodomanov, Anton and Kropotov, Dmitry},
booktitle = {International Conference on Machine Learning},
year = {2016},
  pages = {2597--2605},
volume = {48},
url = {https://mlanthology.org/icml/2016/rodomanov2016icml-superlinearlyconvergent/}
}