Fast Rates for Regularized Objectives
Abstract
We show that the empirical minimizer of a stochastic strongly convex objective, where the stochastic component is linear, converges to the population minimizer with rate $O(1/n)$. The result applies, in particular, to the SVM objective. Thus, we obtain a rate of $O(1/n)$ on the convergence of the SVM objective to its infinite-data limit. We demonstrate how this is essential for obtaining tight oracle inequalities for SVMs. The results also extend to strong convexity with respect to other $\ell_p$ norms, and hence to objectives regularized using other norms.
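As an illustration, the SVM instance of the result can be sketched as follows (the notation here is illustrative; the precise constants and conditions are given in the paper). The population and empirical regularized objectives are

```latex
\[
F(\mathbf{w}) = \frac{\lambda}{2}\|\mathbf{w}\|^2
  + \mathbb{E}_{(\mathbf{x},y)}\bigl[\max\{0,\ 1 - y\langle\mathbf{w},\mathbf{x}\rangle\}\bigr],
\qquad
\hat{F}_n(\mathbf{w}) = \frac{\lambda}{2}\|\mathbf{w}\|^2
  + \frac{1}{n}\sum_{i=1}^{n}\max\{0,\ 1 - y_i\langle\mathbf{w},\mathbf{x}_i\rangle\},
\]
and the fast rate asserted for the empirical minimizer
$\hat{\mathbf{w}} = \arg\min_{\mathbf{w}} \hat{F}_n(\mathbf{w})$ takes the form
\[
\mathbb{E}\bigl[F(\hat{\mathbf{w}})\bigr] - \min_{\mathbf{w}} F(\mathbf{w})
  = O\!\left(\frac{1}{\lambda n}\right).
\]
```

Here $\lambda$-strong convexity of $F$ comes from the $\frac{\lambda}{2}\|\mathbf{w}\|^2$ regularizer, while the hinge-loss term is the linear stochastic component referred to in the abstract.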
Cite
Sridharan et al. "Fast Rates for Regularized Objectives." Neural Information Processing Systems, 2008.
BibTeX
@inproceedings{sridharan2008neurips-fast,
title = {{Fast Rates for Regularized Objectives}},
author = {Sridharan, Karthik and Shalev-Shwartz, Shai and Srebro, Nathan},
booktitle = {Neural Information Processing Systems},
year = {2008},
pages = {1545-1552},
url = {https://mlanthology.org/neurips/2008/sridharan2008neurips-fast/}
}