Exponential Convergence of Testing Error for Stochastic Gradient Methods
Abstract
We consider binary classification problems with positive definite kernels and square loss, and study the convergence rates of stochastic gradient methods. We show that while the excess testing loss (squared loss) converges slowly to zero as the number of observations (and thus iterations) goes to infinity, the testing error (classification error) converges exponentially fast if low-noise conditions are assumed.
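To make the setting concrete, below is a minimal sketch (not the authors' implementation) of single-pass stochastic gradient descent on the square loss in a reproducing kernel Hilbert space, with classification by the sign of the learned function. The Gaussian kernel, constant step size, and synthetic low-noise data are illustrative assumptions, not choices taken from the paper.

```python
# Minimal sketch of single-pass kernel SGD with square loss; assumptions:
# Gaussian kernel, constant step size, synthetic well-separated data.
import numpy as np

def gaussian_kernel(x, y, bandwidth=1.0):
    """RBF kernel k(x, y) = exp(-||x - y||^2 / (2 * bandwidth^2))."""
    return np.exp(-np.sum((x - y) ** 2) / (2.0 * bandwidth ** 2))

def kernel_sgd(X, y, gamma=0.5, bandwidth=1.0):
    """One pass of SGD on the square loss in the RKHS.

    The iterate is stored as a kernel expansion f_t(.) = sum_i alpha[i] k(X[i], .).
    Step t performs a stochastic gradient step on (f(x_t) - y_t)^2 / 2,
    which amounts to alpha[t] = gamma * (y[t] - f_{t-1}(X[t])).
    """
    n = len(y)
    alpha = np.zeros(n)
    for t in range(n):
        # Evaluate the current iterate at the new observation.
        pred = sum(alpha[i] * gaussian_kernel(X[i], X[t], bandwidth) for i in range(t))
        alpha[t] = gamma * (y[t] - pred)
    return alpha

def predict(X_train, alpha, x, bandwidth=1.0):
    """Classify a new point by the sign of the kernel expansion."""
    f = sum(a * gaussian_kernel(xi, x, bandwidth) for a, xi in zip(alpha, X_train))
    return 1 if f >= 0 else -1

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic binary problem with a clear margin (a low-noise situation).
    n = 500
    X = rng.uniform(-1, 1, size=(n, 2))
    y = np.where(X[:, 0] + X[:, 1] >= 0, 1, -1)
    alpha = kernel_sgd(X, y)
    X_test = rng.uniform(-1, 1, size=(200, 2))
    y_test = np.where(X_test[:, 0] + X_test[:, 1] >= 0, 1, -1)
    err = np.mean([predict(X, alpha, x) != yt for x, yt in zip(X_test, y_test)])
    print(f"test classification error: {err:.3f}")
```

The distinction drawn in the abstract is between the squared-loss excess risk of such an iterate, which decays only polynomially in the number of observations, and its classification error, which under low-noise assumptions decays exponentially fast.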
Cite
Text
Pillaud-Vivien et al. "Exponential Convergence of Testing Error for Stochastic Gradient Methods." Annual Conference on Computational Learning Theory, 2018.
Markdown
[Pillaud-Vivien et al. "Exponential Convergence of Testing Error for Stochastic Gradient Methods." Annual Conference on Computational Learning Theory, 2018.](https://mlanthology.org/colt/2018/pillaudvivien2018colt-exponential/)
BibTeX
@inproceedings{pillaudvivien2018colt-exponential,
title = {{Exponential Convergence of Testing Error for Stochastic Gradient Methods}},
author = {Pillaud-Vivien, Loucas and Rudi, Alessandro and Bach, Francis R.},
booktitle = {Annual Conference on Computational Learning Theory},
year = {2018},
pages = {250--296},
url = {https://mlanthology.org/colt/2018/pillaudvivien2018colt-exponential/}
}