The Support Vector Decomposition Machine

Abstract

In machine learning problems with tens of thousands of features and only dozens or hundreds of independent training examples, dimensionality reduction is essential for good learning performance. In previous work, many researchers have treated the learning problem in two separate phases: first use an algorithm such as singular value decomposition to reduce the dimensionality of the data set, and then use a classification algorithm such as naïve Bayes or support vector machines to learn a classifier. We demonstrate that it is possible to combine the two goals of dimensionality reduction and classification into a single learning objective, and present a novel and efficient algorithm which optimizes this objective directly. We present experimental results in fMRI analysis which show that we can achieve better learning performance and lower-dimensional representations than two-phase approaches can.
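The core idea of the abstract — replacing the two-phase "SVD, then SVM" pipeline with one joint objective — can be sketched in a few lines. The code below is a minimal illustrative toy, not the paper's actual SVDM algorithm: it jointly minimizes a low-rank reconstruction error plus a hinge loss on the low-dimensional codes by plain gradient descent, with all loss weights, step sizes, and data shapes chosen only for illustration.

```python
import numpy as np

# Illustrative sketch (NOT the paper's SVDM algorithm): jointly learn a
# low-rank factorization X ~ Z @ W and a linear classifier on the
# low-dimensional codes Z, instead of running SVD and SVM in two phases.

rng = np.random.default_rng(0)

# Toy data: 40 examples, 50 features (many features, few examples),
# binary labels in {-1, +1}. All sizes are illustrative.
n, d, k = 40, 50, 3
X = rng.normal(size=(n, d))
y = np.sign(rng.normal(size=n))

# Parameters: low-dim codes Z (n x k), basis W (k x d), classifier (w, b).
Z = rng.normal(scale=0.1, size=(n, k))
W = rng.normal(scale=0.1, size=(k, d))
w = np.zeros(k)
b = 0.0
C = 1.0    # weight on the classification term (illustrative choice)
lr = 1e-3  # gradient step size (illustrative choice)

def joint_loss(Z, W, w, b):
    """Single objective: reconstruction error + hinge loss on the codes."""
    recon = np.sum((X - Z @ W) ** 2)
    margins = y * (Z @ w + b)
    hinge = np.sum(np.maximum(0.0, 1.0 - margins))
    return recon + C * hinge

losses = []
for _ in range(300):
    # Gradient of the reconstruction term ||X - ZW||^2.
    R = Z @ W - X                      # n x d residual
    gZ = 2.0 * R @ W.T
    gW = 2.0 * Z.T @ R
    # Subgradient of the hinge term, active where the margin is below 1.
    margins = y * (Z @ w + b)
    active = (margins < 1.0).astype(float)
    gZ += C * (-(active * y)[:, None] * w[None, :])
    gw = C * (-(active * y) @ Z)
    gb = C * (-np.sum(active * y))
    # Simultaneous gradient step on codes, basis, and classifier, so the
    # learned representation is shaped by both goals at once.
    Z -= lr * gZ
    W -= lr * gW
    w -= lr * gw
    b -= lr * gb
    losses.append(joint_loss(Z, W, w, b))

print(f"joint loss: start={losses[0]:.2f} end={losses[-1]:.2f}")
```

Because the classification loss feeds back into the updates for Z and W, the low-dimensional representation is steered toward directions useful for the classifier, which is the contrast with a fixed SVD projection.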

Cite

Text

Pereira and Gordon. "The Support Vector Decomposition Machine." International Conference on Machine Learning, 2006. doi:10.1145/1143844.1143931

Markdown

[Pereira and Gordon. "The Support Vector Decomposition Machine." International Conference on Machine Learning, 2006.](https://mlanthology.org/icml/2006/pereira2006icml-support/) doi:10.1145/1143844.1143931

BibTeX

@inproceedings{pereira2006icml-support,
  title     = {{The Support Vector Decomposition Machine}},
  author    = {Pereira, Francisco and Gordon, Geoffrey J.},
  booktitle = {International Conference on Machine Learning},
  year      = {2006},
  pages     = {689--696},
  doi       = {10.1145/1143844.1143931},
  url       = {https://mlanthology.org/icml/2006/pereira2006icml-support/}
}