SVM Versus Least Squares SVM

Abstract

We study the relationship between Support Vector Machines (SVM) and Least Squares SVM (LS-SVM). Our main result shows that, under mild conditions, LS-SVM for binary-class classification is equivalent to the hard-margin SVM based on the well-known Mahalanobis distance measure. We further study the asymptotics of the hard-margin SVM when the data dimensionality tends to infinity with a fixed sample size. Using recently developed theory on the asymptotic distribution of the eigenvalues of the covariance matrix, we show that, under mild conditions, the equivalence result holds for the traditional Euclidean distance measure. These equivalence results are further extended to the multi-class case. Experimental results confirm the presented theoretical analysis.
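
The following is a minimal empirical sketch, not code from the paper, illustrating the equivalence the abstract claims in the high-dimensional, small-sample regime: a hard-margin linear SVM (approximated here by a very large C) and a linear-kernel LS-SVM, solved via the standard Suykens-style linear system, tend to agree in their predictions when the dimensionality d greatly exceeds the sample size n. The data model, parameter values (C, gamma, n, d), and variable names are all illustrative assumptions.

import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n, d = 40, 2000                                  # few samples, high dimensionality
y = np.repeat([-1.0, 1.0], n // 2)
X = rng.normal(size=(n, d)) + 0.5 * y[:, None]   # two shifted Gaussian classes

# Hard-margin linear SVM, approximated by a very large penalty C.
svm = SVC(kernel="linear", C=1e8).fit(X, y)

# LS-SVM with a linear kernel: solve the (n+1) x (n+1) linear system
#   [ 0   1^T           ] [b]       [0]
#   [ 1   K + I / gamma ] [alpha] = [y]
# (ridge-regression-style LS-SVM with +/-1 targets; large gamma mimics
# hard equality constraints).
gamma = 1e8
K = X @ X.T
A = np.zeros((n + 1, n + 1))
A[0, 1:] = 1.0
A[1:, 0] = 1.0
A[1:, 1:] = K + np.eye(n) / gamma
rhs = np.concatenate(([0.0], y))
sol = np.linalg.solve(A, rhs)
b, alpha = sol[0], sol[1:]

# Compare the two decision rules on fresh data from the same model.
yt = np.repeat([-1.0, 1.0], 100)
Xt = rng.normal(size=(200, d)) + 0.5 * yt[:, None]
agree = np.mean(svm.predict(Xt) == np.sign(Xt @ X.T @ alpha + b))
print(f"SVM / LS-SVM prediction agreement: {agree:.3f}")

Because the sketch uses plain Euclidean inner products, near-perfect agreement here is consistent with the paper's asymptotic result that the equivalence holds for the Euclidean distance measure as d grows with n fixed.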

Cite

Text

Ye and Xiong. "SVM Versus Least Squares SVM." Proceedings of the Eleventh International Conference on Artificial Intelligence and Statistics, 2007.

Markdown

[Ye and Xiong. "SVM Versus Least Squares SVM." Proceedings of the Eleventh International Conference on Artificial Intelligence and Statistics, 2007.](https://mlanthology.org/aistats/2007/ye2007aistats-svm/)

BibTeX

@inproceedings{ye2007aistats-svm,
  title     = {{SVM Versus Least Squares SVM}},
  author    = {Ye, Jieping and Xiong, Tao},
  booktitle = {Proceedings of the Eleventh International Conference on Artificial Intelligence and Statistics},
  year      = {2007},
  pages     = {644--651},
  volume    = {2},
  url       = {https://mlanthology.org/aistats/2007/ye2007aistats-svm/}
}