Hybrid Stochastic-Deterministic Minibatch Proximal Gradient: Less-than-Single-Pass Optimization with Nearly Optimal Generalization

Abstract

Stochastic variance-reduced gradient (SVRG) algorithms have been shown to work favorably in solving large-scale learning problems. Despite this remarkable success, the stochastic gradient complexity of SVRG-type algorithms usually scales linearly with the data size and thus could still be expensive for huge datasets. To address this deficiency, we propose a hybrid stochastic-deterministic minibatch proximal gradient (HSDMPG) algorithm for strongly convex problems that enjoys provably improved, data-size-independent complexity guarantees. More precisely, for a quadratic loss $F(\mathbf{w})$ of $n$ components, we prove that HSDMPG can attain an $\epsilon$-optimization-error $\mathbb{E}[F(\mathbf{w})-F(\mathbf{w}^*)] \leq \epsilon$ within $\mathcal{O}\Big(\frac{\kappa^{1.5}\epsilon^{0.75} \log^{1.5}(\frac{1}{\epsilon}) + 1}{\epsilon} \wedge \Big(\kappa \sqrt{n} \log^{1.5}\big(\frac{1}{\epsilon}\big) + n \log \big(\frac{1}{\epsilon}\big) \Big) \Big)$ stochastic gradient evaluations, where $\kappa$ is the condition number. For generic strongly convex loss functions, we prove a nearly identical complexity bound, albeit with slightly larger logarithmic factors. For large-scale learning problems, our complexity bounds are superior to those of the prior state-of-the-art SVRG algorithms, with or without dependence on data size. In particular, for $\epsilon=\mathcal{O}\big(1/\sqrt{n}\big)$, which is of the order of the intrinsic excess error of a learning model and thus sufficient for generalization, the stochastic gradient complexity bounds of HSDMPG for quadratic and generic loss functions are respectively $\mathcal{O}(n^{0.875}\log^{1.5}(n))$ and $\mathcal{O}(n^{0.875}\log^{2.25}(n))$, which, to the best of our knowledge, are the first to achieve optimal generalization in less than a single pass over the data. Extensive numerical results demonstrate the computational advantages of our algorithm over prior methods.
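To unpack the headline $\mathcal{O}(n^{0.875})$ rate, here is a short sanity-check derivation for the quadratic-loss case. It plugs $\epsilon = \Theta(1/\sqrt{n})$ into the data-size-independent branch of the bound and additionally assumes $\kappa = \Theta(\sqrt{n})$, as is typical for regularized empirical risk minimization with regularization weight $\Theta(1/\sqrt{n})$; this scaling of $\kappa$ is an assumption made here for illustration, not a statement from the abstract.

$$
\frac{\kappa^{1.5}\epsilon^{0.75}\log^{1.5}\big(\frac{1}{\epsilon}\big) + 1}{\epsilon}
= \kappa^{1.5}\,\epsilon^{-0.25}\log^{1.5}\Big(\frac{1}{\epsilon}\Big) + \epsilon^{-1}
= \Theta\big(n^{3/4}\big)\cdot\Theta\big(n^{1/8}\big)\cdot\Theta\big(\log^{1.5} n\big) + \Theta\big(n^{1/2}\big)
= \Theta\big(n^{0.875}\log^{1.5} n\big),
$$

using $\log(1/\epsilon) = \log(\sqrt{n}) = \Theta(\log n)$. The $n$-dependent branch would cost at least $\Theta(n\log(1/\epsilon))$ gradient evaluations, so the minimum of the two branches is indeed sub-linear in $n$.

The abstract does not spell out the update rule, but the name points to an SVRG-style combination of an occasionally computed full (deterministic) gradient with cheap minibatch (stochastic) corrections inside a proximal-gradient step. The sketch below is a generic skeleton of that idea for ridge regression; it is not the authors' HSDMPG procedure, and all function names and hyper-parameters are illustrative assumptions.

# Schematic sketch of an SVRG-style minibatch proximal-gradient loop for
# ridge regression. NOT the authors' HSDMPG algorithm; it only illustrates
# the generic "hybrid" idea: a deterministic anchor gradient, computed once
# per epoch, is combined with stochastic minibatch corrections inside each
# proximal-gradient update. All names and hyper-parameters are assumptions.
import numpy as np

def prox_l2(w, lam, eta):
    """Proximal operator of (lam/2)*||w||^2 with step size eta."""
    return w / (1.0 + eta * lam)

def hybrid_minibatch_prox_grad(A, b, lam=1e-3, eta=0.1,
                               epochs=5, batch=32, rng=None):
    rng = rng if rng is not None else np.random.default_rng(0)
    n, d = A.shape
    w = np.zeros(d)
    for _ in range(epochs):
        w_anchor = w.copy()
        g_full = A.T @ (A @ w_anchor - b) / n   # deterministic anchor gradient
        for _ in range(n // batch):
            idx = rng.choice(n, size=batch, replace=False)
            Ai, bi = A[idx], b[idx]
            # variance-reduced estimate: minibatch gradient at w, minus the
            # minibatch gradient at the anchor, plus the full anchor gradient
            g = (Ai.T @ (Ai @ w - bi) - Ai.T @ (Ai @ w_anchor - bi)) / batch + g_full
            w = prox_l2(w - eta * g, lam, eta)
    return w

# tiny usage example on synthetic data
A = np.random.default_rng(1).standard_normal((1000, 20))
w_true = np.ones(20)
b = A @ w_true + 0.01 * np.random.default_rng(2).standard_normal(1000)
w_hat = hybrid_minibatch_prox_grad(A, b)
print(np.linalg.norm(w_hat - w_true))

In this skeleton, g_full plays the deterministic role and the minibatch correction plays the stochastic role; the variance of the combined estimator shrinks as w approaches the anchor point, which is the mechanism variance-reduced methods exploit.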

Cite

Text

Zhou and Yuan. "Hybrid Stochastic-Deterministic Minibatch Proximal Gradient: Less-than-Single-Pass Optimization with Nearly Optimal Generalization." International Conference on Machine Learning, 2020.

Markdown

[Zhou and Yuan. "Hybrid Stochastic-Deterministic Minibatch Proximal Gradient: Less-than-Single-Pass Optimization with Nearly Optimal Generalization." International Conference on Machine Learning, 2020.](https://mlanthology.org/icml/2020/zhou2020icml-hybrid/)

BibTeX

@inproceedings{zhou2020icml-hybrid,
  title     = {{Hybrid Stochastic-Deterministic Minibatch Proximal Gradient: Less-than-Single-Pass Optimization with Nearly Optimal Generalization}},
  author    = {Zhou, Pan and Yuan, Xiao-Tong},
  booktitle = {International Conference on Machine Learning},
  year      = {2020},
  pages     = {11556--11565},
  volume    = {119},
  url       = {https://mlanthology.org/icml/2020/zhou2020icml-hybrid/}
}