SVM-Optimization and Steepest-Descent Line Search
Abstract
We consider (a subclass of) convex quadratic optimization problems and analyze decomposition algorithms that perform, at least approximately, steepest-descent exact line search. We show that these algorithms, when implemented properly, are within $\varepsilon$ of optimality after $O(\log 1/\varepsilon)$ iterations for strictly convex cost functions, and after $O(1/\varepsilon)$ iterations in the general case. Our analysis is general enough to cover the algorithms that are used in software packages like SVMTorch and (first or second order) LibSVM. To the best of our knowledge, this is the first paper that establishes a convergence rate for these algorithms without introducing unnecessarily restrictive assumptions.
Cite
Text
Simon and List. "SVM-Optimization and Steepest-Descent Line Search." Annual Conference on Computational Learning Theory, 2009.
Markdown
[Simon and List. "SVM-Optimization and Steepest-Descent Line Search." Annual Conference on Computational Learning Theory, 2009.](https://mlanthology.org/colt/2009/simon2009colt-svm/)
BibTeX
@inproceedings{simon2009colt-svm,
title = {{SVM-Optimization and Steepest-Descent Line Search}},
author = {Simon, Hans Ulrich and List, Nikolas},
booktitle = {Annual Conference on Computational Learning Theory},
year = {2009},
url = {https://mlanthology.org/colt/2009/simon2009colt-svm/}
}