Stochastic Composite Least-Squares Regression with Convergence Rate $O(1/n)$

Abstract

We consider the minimization of composite objective functions given by the sum of the expectation of quadratic functions and an arbitrary convex function. We study the stochastic dual averaging algorithm with a constant step-size, showing that it leads to a convergence rate of $O(1/n)$ without strong convexity assumptions. This thus extends earlier results on least-squares regression with the Euclidean geometry to (a) all convex regularizers and constraints, and (b) all geometries represented by a Bregman divergence. This is achieved by a new proof technique that relates stochastic and deterministic recursions.
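To make the setting concrete, below is a minimal sketch of stochastic dual averaging with a constant step-size on a composite least-squares problem, specialized to the Euclidean geometry with an $\ell_1$ regularizer. It is an illustration of the general algorithmic template, not necessarily the exact recursion analyzed in the paper; the step-size, regularization strength, and synthetic data are illustrative assumptions.

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal map of tau * ||.||_1 (closed form for the l1 regularizer)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def composite_dual_averaging(features, targets, gamma=0.05, lam=0.01):
    """Sketch of stochastic dual averaging with a constant step-size gamma for
    least-squares regression plus an l1 regularizer (Euclidean geometry).

    Performs a single pass over the data and returns the averaged iterate,
    which is the quantity for which an O(1/n) rate is stated in the paper.
    """
    n, d = features.shape
    theta = np.zeros(d)          # primal iterate theta_0
    z = np.zeros(d)              # accumulated (scaled) stochastic gradients
    theta_avg = np.zeros(d)      # running average of the iterates
    for k in range(n):
        x, y = features[k], targets[k]
        grad = (x @ theta - y) * x           # stochastic gradient of the quadratic loss
        z -= gamma * grad
        # Composite dual-averaging step:
        #   theta_k = argmin_theta (k+1)*gamma*lam*||theta||_1 + 0.5*||theta||^2 - <z, theta>
        # which reduces to soft-thresholding in the Euclidean case.
        theta = soft_threshold(z, (k + 1) * gamma * lam)
        theta_avg += (theta - theta_avg) / (k + 1)
    return theta_avg

# Illustrative usage on synthetic data (all values are assumptions).
rng = np.random.default_rng(0)
X = rng.standard_normal((2000, 20))
theta_star = np.zeros(20)
theta_star[:5] = 1.0
y = X @ theta_star + 0.1 * rng.standard_normal(2000)
theta_hat = composite_dual_averaging(X, y)
```

Replacing the Euclidean squared norm and the soft-thresholding step with a Bregman divergence and the corresponding composite mirror step covers the more general geometries mentioned in the abstract.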

Cite

Text

Flammarion and Bach. "Stochastic Composite Least-Squares Regression with Convergence Rate $O(1/n)$." Proceedings of the 2017 Conference on Learning Theory, 2017.

Markdown

[Flammarion and Bach. "Stochastic Composite Least-Squares Regression with Convergence Rate $O(1/n)$." Proceedings of the 2017 Conference on Learning Theory, 2017.](https://mlanthology.org/colt/2017/flammarion2017colt-stochastic/)

BibTeX

@inproceedings{flammarion2017colt-stochastic,
  title     = {{Stochastic Composite Least-Squares Regression with Convergence Rate $O(1/n)$}},
  author    = {Flammarion, Nicolas and Bach, Francis},
  booktitle = {Proceedings of the 2017 Conference on Learning Theory},
  year      = {2017},
  pages     = {831--875},
  volume    = {65},
  url       = {https://mlanthology.org/colt/2017/flammarion2017colt-stochastic/}
}