Large Margin Component Analysis

Abstract

Metric learning has been shown to significantly improve the accuracy of k-nearest neighbor (kNN) classification. In problems involving thousands of features, distance learning algorithms cannot be used due to overfitting and high computational complexity. In such cases, previous work has relied on a two-step solution: first apply dimensionality reduction methods to the data, and then learn a metric in the resulting low-dimensional subspace. In this paper we show that better classification performance can be achieved by unifying the objectives of dimensionality reduction and metric learning. We propose a method that solves for the low-dimensional projection of the inputs, which minimizes a metric objective aimed at separating points in different classes by a large margin. This projection is defined by a significantly smaller number of parameters than metrics learned in input space, and thus our optimization reduces the risks of overfitting. Theory and results are presented for both a linear as well as a kernelized version of the algorithm. Overall, we achieve classification rates similar, and in several cases superior, to those of support vector machines.
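
The abstract describes learning a low-dimensional linear projection directly under a large-margin metric objective, rather than reducing dimensionality first and learning a metric afterward. The sketch below illustrates that idea under stated assumptions: it uses an LMNN-style hinge loss (pull target neighbors of the same class close, push differently labeled points beyond a margin) and plain gradient descent on the projection matrix. The function names, the synthetic data, and hyperparameters such as n_targets, margin, and lr are illustrative choices, not the authors' implementation.

import numpy as np

def lmca_loss_grad(L, X, y, targets, c=1.0, margin=1.0):
    """Large-margin loss and gradient for a projection matrix L of shape (d, D)."""
    n, D = X.shape
    loss = 0.0
    grad = np.zeros_like(L)
    for i in range(n):
        for j in targets[i]:                      # same-class target neighbors
            dij = X[i] - X[j]
            Ldij = L @ dij
            pull = Ldij @ Ldij                    # pull term: keep target neighbors close
            loss += pull
            grad += 2.0 * np.outer(Ldij, dij)
            for l in range(n):                    # push term: differently labeled impostors
                if y[l] == y[i]:
                    continue
                dil = X[i] - X[l]
                Ldil = L @ dil
                viol = margin + pull - Ldil @ Ldil
                if viol > 0:                      # hinge is active
                    loss += c * viol
                    grad += 2.0 * c * (np.outer(Ldij, dij) - np.outer(Ldil, dil))
    return loss, grad

def fit_lmca(X, y, d=2, n_targets=3, steps=200, lr=1e-3, seed=0):
    """Gradient descent on the projection matrix; a sketch, not the paper's optimizer."""
    rng = np.random.default_rng(seed)
    n, D = X.shape
    targets = []                                  # nearest same-class points in input space
    for i in range(n):
        same = np.flatnonzero((y == y[i]) & (np.arange(n) != i))
        dists = np.linalg.norm(X[same] - X[i], axis=1)
        targets.append(same[np.argsort(dists)[:n_targets]])
    L = 0.1 * rng.standard_normal((d, D))         # low-dimensional projection, d << D
    for _ in range(steps):
        _, grad = lmca_loss_grad(L, X, y, targets)
        L -= lr * grad
    return L

if __name__ == "__main__":
    # Two Gaussian blobs in 10 dimensions, projected down to 2.
    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(0.0, 1.0, (30, 10)), rng.normal(2.0, 1.0, (30, 10))])
    y = np.repeat([0, 1], 30)
    L = fit_lmca(X, y, d=2)
    print("learned projection shape:", L.shape)

Because the loss is optimized over the d x D projection rather than a full D x D metric, the parameter count stays small, which is the overfitting argument made in the abstract; the kernelized version described in the paper is not sketched here.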

Cite

Text

Torresani and Lee. "Large Margin Component Analysis." Neural Information Processing Systems, 2006.

Markdown

[Torresani and Lee. "Large Margin Component Analysis." Neural Information Processing Systems, 2006.](https://mlanthology.org/neurips/2006/torresani2006neurips-large/)

BibTeX

@inproceedings{torresani2006neurips-large,
  title     = {{Large Margin Component Analysis}},
  author    = {Torresani, Lorenzo and Lee, Kuang-chih},
  booktitle = {Neural Information Processing Systems},
  year      = {2006},
  pages     = {1385--1392},
  url       = {https://mlanthology.org/neurips/2006/torresani2006neurips-large/}
}