SVM Optimization: Inverse Dependence on Training Set Size
Abstract
We discuss how the runtime of SVM optimization should decrease as the size of the training data increases. We present theoretical and empirical results demonstrating how a simple subgradient descent approach indeed displays such behavior, at least for linear kernels.
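The "simple subgradient descent approach" the abstract refers to can be sketched as a Pegasos-style stochastic subgradient update for a linear SVM. The sketch below is illustrative, not the paper's implementation: the objective, step size, and toy data are assumptions chosen to show why per-step cost does not depend on the training set size.

```python
import numpy as np

def pegasos(X, y, lam=0.1, n_iters=1000, seed=0):
    """Stochastic subgradient descent for a linear SVM (Pegasos-style sketch).

    Approximately minimizes (lam/2)*||w||^2 + (1/m)*sum_i hinge(y_i * <w, x_i>).
    Each step touches a single example, so per-step cost is independent of m,
    which is why more data can mean less total optimization time.
    """
    rng = np.random.default_rng(seed)
    m, d = X.shape
    w = np.zeros(d)
    for t in range(1, n_iters + 1):
        i = rng.integers(m)          # sample one training example
        eta = 1.0 / (lam * t)        # step size 1/(lambda * t)
        margin = y[i] * (w @ X[i])
        w *= (1.0 - eta * lam)       # shrink from the regularizer subgradient
        if margin < 1.0:             # hinge loss active: add its subgradient
            w += eta * y[i] * X[i]
    return w

# Toy linearly separable data (hypothetical, for illustration only).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(size=(200, 2)) + 2.0,
               rng.normal(size=(200, 2)) - 2.0])
y = np.concatenate([np.ones(200), -np.ones(200)])
w = pegasos(X, y)
acc = np.mean(np.sign(X @ w) == y)
```

On this well-separated toy set the learned hyperplane classifies nearly all points correctly; the point of the sketch is that the loop length is a fixed iteration budget, not a pass over all m examples.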
Cite
Text
Shalev-Shwartz and Srebro. "SVM Optimization: Inverse Dependence on Training Set Size." International Conference on Machine Learning, 2008. doi:10.1145/1390156.1390273
Markdown
[Shalev-Shwartz and Srebro. "SVM Optimization: Inverse Dependence on Training Set Size." International Conference on Machine Learning, 2008.](https://mlanthology.org/icml/2008/shalevshwartz2008icml-svm/) doi:10.1145/1390156.1390273
BibTeX
@inproceedings{shalevshwartz2008icml-svm,
title = {{SVM Optimization: Inverse Dependence on Training Set Size}},
author = {Shalev-Shwartz, Shai and Srebro, Nathan},
booktitle = {International Conference on Machine Learning},
year = {2008},
pages = {928--935},
doi = {10.1145/1390156.1390273},
url = {https://mlanthology.org/icml/2008/shalevshwartz2008icml-svm/}
}