Fast Rates for Exp-Concave Empirical Risk Minimization

Abstract

We consider Empirical Risk Minimization (ERM) in the context of stochastic optimization with exp-concave and smooth losses---a general optimization framework that captures several important learning problems, including linear and logistic regression, learning SVMs with the squared hinge loss, portfolio selection, and more. In this setting, we establish the first evidence that ERM is able to attain fast generalization rates, and show that the expected loss of the ERM solution in $d$ dimensions converges to the optimal expected loss at a rate of $d/n$. This rate matches existing lower bounds up to constants and improves by a $\log{n}$ factor upon the state of the art, which is only known to be attained by an online-to-batch conversion of computationally expensive online algorithms.
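For concreteness, the setting can be stated as follows (the notation here is the standard convention, not necessarily that of the paper). A loss function $f$ is $\alpha$-exp-concave over a convex domain $\mathcal{W}$ if the map

$$ w \;\mapsto\; e^{-\alpha f(w)} \quad \text{is concave on } \mathcal{W} , $$

and, given i.i.d. samples $z_1,\dots,z_n$, the ERM solution is

$$ \hat{w} \;\in\; \operatorname*{arg\,min}_{w \in \mathcal{W}} \; \frac{1}{n}\sum_{i=1}^{n} \ell(w, z_i) . $$

The paper's main result is then a bound of the form $\mathbb{E}\big[L(\hat{w})\big] - \min_{w \in \mathcal{W}} L(w) = O(d/n)$, where $L(w) = \mathbb{E}_z[\ell(w,z)]$ is the expected loss. The squared loss $\ell(w,z) = (\langle w, x\rangle - y)^2$ and the logistic loss are standard examples of exp-concave losses over bounded domains.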

Cite

Text

Koren and Levy. "Fast Rates for Exp-Concave Empirical Risk Minimization." Neural Information Processing Systems, 2015.

Markdown

[Koren and Levy. "Fast Rates for Exp-Concave Empirical Risk Minimization." Neural Information Processing Systems, 2015.](https://mlanthology.org/neurips/2015/koren2015neurips-fast/)

BibTeX

@inproceedings{koren2015neurips-fast,
  title     = {{Fast Rates for Exp-Concave Empirical Risk Minimization}},
  author    = {Koren, Tomer and Levy, Kfir},
  booktitle = {Neural Information Processing Systems},
  year      = {2015},
  pages     = {1477--1485},
  url       = {https://mlanthology.org/neurips/2015/koren2015neurips-fast/}
}