Stochastic L-BFGS Revisited: Improved Convergence Rates and Practical Acceleration Strategies
Abstract
We revisit the stochastic limited-memory BFGS (L-BFGS) algorithm. By proposing a new framework for analyzing convergence, we theoretically improve the (linear) convergence rates and computational complexities of the stochastic L-BFGS algorithms in previous works. In addition, we propose several practical acceleration strategies to speed up the empirical performance of such algorithms. We also provide theoretical analyses for most of the strategies. Experiments on large-scale logistic and ridge regression problems demonstrate that our proposed strategies yield significant improvements vis-à-vis competing state-of-the-art algorithms.
Cite
Text
Zhao et al. "Stochastic L-BFGS Revisited: Improved Convergence Rates and Practical Acceleration Strategies." Conference on Uncertainty in Artificial Intelligence, 2017.
Markdown
[Zhao et al. "Stochastic L-BFGS Revisited: Improved Convergence Rates and Practical Acceleration Strategies." Conference on Uncertainty in Artificial Intelligence, 2017.](https://mlanthology.org/uai/2017/zhao2017uai-stochastic/)
BibTeX
@inproceedings{zhao2017uai-stochastic,
title = {{Stochastic L-BFGS Revisited: Improved Convergence Rates and Practical Acceleration Strategies}},
author = {Zhao, Renbo and Haskell, William B. and Tan, Vincent Y. F.},
booktitle = {Conference on Uncertainty in Artificial Intelligence},
year = {2017},
url = {https://mlanthology.org/uai/2017/zhao2017uai-stochastic/}
}