Sparse Accelerated Exponential Weights

Abstract

We consider the stochastic optimization problem in which a convex function is minimized by observing its gradients sequentially. We introduce SAEW, a new procedure that accelerates exponential weights procedures from the slow rate $1/\sqrt{T}$ to the fast rate $1/T$. Under strong convexity of the risk, we achieve the optimal rate of convergence for approximating sparse parameters in $\mathbb{R}^d$. The acceleration is obtained through successive averaging steps performed in an online fashion. The procedure also produces sparse estimators thanks to additional hard-threshold steps.
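To make the ingredients concrete, here is a minimal illustrative sketch (not the paper's SAEW algorithm itself) combining the three components the abstract names: an exponential weights update over a fixed set of experts, an online averaging step, and a final hard-threshold step for sparsity. The expert set (signed vertices of an $\ell_1$ ball), the learning rate `eta`, and the threshold `tau` are all illustrative assumptions.

```python
import numpy as np

def hard_threshold(x, tau):
    """Zero out coordinates whose magnitude is below tau (sparsification step)."""
    return np.where(np.abs(x) >= tau, x, 0.0)

def ew_sparse_sketch(grad_fn, d, T, radius=1.0, eta=0.1, tau=0.05):
    """Toy sketch: exponential weights over the 2d signed vertices of an
    L1 ball, updated with linearized (gradient) losses; the online average
    of the predictions is hard-thresholded at the end.
    All parameter choices here are illustrative, not the paper's."""
    # Experts: +/- radius * e_j for each coordinate j.
    experts = np.vstack([radius * np.eye(d), -radius * np.eye(d)])  # (2d, d)
    log_w = np.zeros(2 * d)          # log-weights for numerical stability
    avg = np.zeros(d)                # running online average of predictions
    for t in range(1, T + 1):
        w = np.exp(log_w - log_w.max())
        w /= w.sum()
        x_t = w @ experts            # exponentially weighted prediction
        g_t = grad_fn(x_t)           # observed (possibly noisy) gradient
        log_w -= eta * (experts @ g_t)  # multiplicative-weights update
        avg += (x_t - avg) / t       # online averaging step
    return hard_threshold(avg, tau)  # additional hard-threshold step
```

For instance, with the strongly convex risk $f(x) = \tfrac{1}{2}\|x - x^*\|^2$ and a sparse target $x^*$, calling `ew_sparse_sketch(lambda x: x - xstar, d, T)` returns an estimate whose coordinates outside the support of $x^*$ are zeroed by the threshold.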

Cite

Text

Gaillard and Wintenberger. "Sparse Accelerated Exponential Weights." International Conference on Artificial Intelligence and Statistics, 2017.

Markdown

[Gaillard and Wintenberger. "Sparse Accelerated Exponential Weights." International Conference on Artificial Intelligence and Statistics, 2017.](https://mlanthology.org/aistats/2017/gaillard2017aistats-sparse/)

BibTeX

@inproceedings{gaillard2017aistats-sparse,
  title     = {{Sparse Accelerated Exponential Weights}},
  author    = {Gaillard, Pierre and Wintenberger, Olivier},
  booktitle = {International Conference on Artificial Intelligence and Statistics},
  year      = {2017},
  pages     = {75--82},
  url       = {https://mlanthology.org/aistats/2017/gaillard2017aistats-sparse/}
}