Two View Learning: SVM-2K, Theory and Practice

Abstract

Kernel methods make it relatively easy to define complex, high-dimensional feature spaces. This raises the question of how we can identify the relevant subspaces for a particular learning task. When two views of the same phenomenon are available, kernel Canonical Correlation Analysis (KCCA) has been shown to be an effective preprocessing step that can improve the performance of classification algorithms such as the Support Vector Machine (SVM). This paper takes that observation to its logical conclusion and proposes a method, termed SVM-2K, that combines the two-stage learning (KCCA followed by SVM) into a single optimisation. We present both experimental and theoretical analysis of the approach, showing encouraging results and insights.
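To make the two-stage baseline concrete, here is a minimal, hedged sketch of the pipeline that SVM-2K collapses into one optimisation: learn correlated subspaces across two views with CCA, project both views, then classify in the shared space. For self-containment this uses plain linear CCA (a ridge term stands in for the regularisation KCCA needs) and a nearest-centroid classifier in place of the SVM; the data, labels, and function names are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def inv_sqrt(M):
    """Inverse matrix square root via symmetric eigendecomposition."""
    w, V = np.linalg.eigh(M)
    return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

def linear_cca(X, Y, k=1, reg=1e-3):
    """Linear CCA: k projection directions per view plus canonical correlations.

    `reg` is a ridge term on each view's covariance, mirroring the
    regularisation required to make KCCA well-posed.
    """
    X = X - X.mean(0)
    Y = Y - Y.mean(0)
    n = X.shape[0]
    Sxx = X.T @ X / n + reg * np.eye(X.shape[1])
    Syy = Y.T @ Y / n + reg * np.eye(Y.shape[1])
    Sxy = X.T @ Y / n
    Sx, Sy = inv_sqrt(Sxx), inv_sqrt(Syy)
    # SVD of the whitened cross-covariance gives the canonical directions.
    U, s, Vt = np.linalg.svd(Sx @ Sxy @ Sy)
    return Sx @ U[:, :k], Sy @ Vt[:k].T, s[:k]

# Toy two-view data (hypothetical): both views are noisy linear images
# of the same latent class signal, which also serves as the label.
rng = np.random.default_rng(0)
z = np.repeat([0.0, 1.0], 50)
X = np.outer(z, rng.normal(size=5)) + 0.3 * rng.normal(size=(100, 5))
Y = np.outer(z, rng.normal(size=8)) + 0.3 * rng.normal(size=(100, 8))

# Stage 1: CCA finds the shared subspace; stage 2 classifies in it.
Wx, Wy, corr = linear_cca(X, Y, k=1)
feats = np.hstack([(X - X.mean(0)) @ Wx, (Y - Y.mean(0)) @ Wy])

# Nearest-centroid classifier as a stand-in for the SVM of stage two.
c0, c1 = feats[z == 0].mean(0), feats[z == 1].mean(0)
pred = (np.linalg.norm(feats - c1, axis=1)
        < np.linalg.norm(feats - c0, axis=1)).astype(float)
acc = (pred == z).mean()
```

The point of the sketch is the decoupling: stage one never sees the labels, which is exactly the inefficiency SVM-2K addresses by optimising the subspace and the classifier jointly.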

Cite

Text

Farquhar et al. "Two View Learning: SVM-2K, Theory and Practice." Neural Information Processing Systems, 2005.

Markdown

[Farquhar et al. "Two View Learning: SVM-2K, Theory and Practice." Neural Information Processing Systems, 2005.](https://mlanthology.org/neurips/2005/farquhar2005neurips-two/)

BibTeX

@inproceedings{farquhar2005neurips-two,
  title     = {{Two View Learning: SVM-2K, Theory and Practice}},
  author    = {Farquhar, Jason and Hardoon, David and Meng, Hongying and Shawe-Taylor, John S. and Szedmák, Sándor},
  booktitle = {Neural Information Processing Systems},
  year      = {2005},
  pages     = {355-362},
  url       = {https://mlanthology.org/neurips/2005/farquhar2005neurips-two/}
}