Metric Learning by Collapsing Classes
Abstract
We present an algorithm for learning a quadratic Gaussian metric (Mahalanobis distance) for use in classification tasks. Our method relies on the simple geometric intuition that a good metric is one under which points in the same class are simultaneously near each other and far from points in the other classes. We construct a convex optimization problem whose solution generates such a metric by trying to collapse all examples in the same class to a single point and push examples in other classes infinitely far away. We show that when the metric we learn is used in simple classifiers, it yields substantial improvements over standard alternatives on a variety of problems. We also discuss how the learned metric may be used to obtain a compact low dimensional feature representation of the original input space, allowing more efficient classification with very little reduction in performance.
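To make the abstract's construction concrete, the following is a minimal NumPy sketch of the idea (not the authors' code): a softmax neighbor distribution induced by the Mahalanobis distance is pulled, via a convex KL objective, toward an "ideal" distribution that is uniform over each point's classmates and zero elsewhere, optimized by projected gradient descent onto the PSD cone. The function name mcml_fit, the step size, and the iteration count are illustrative assumptions; the sketch also assumes every class has at least two examples.

import numpy as np

def mcml_fit(X, y, n_iters=200, lr=1e-3):
    # Learn a PSD matrix A defining d_A(x, x') = (x - x')^T A (x - x').
    # Neighbor distribution: p_A(j|i) proportional to exp(-d_A(x_i, x_j)), j != i.
    # Target p_0(j|i): uniform over i's classmates (the "collapsed" ideal).
    # Objective: sum_i KL(p_0(.|i) || p_A(.|i)), convex in A.
    n, d = X.shape
    A = np.eye(d)
    same = (y[:, None] == y[None, :]).astype(float)
    np.fill_diagonal(same, 0.0)
    p0 = same / same.sum(axis=1, keepdims=True)   # assumes >= 2 points per class
    diffs = X[:, None, :] - X[None, :, :]         # (n, n, d); fine for small n
    for _ in range(n_iters):
        dists = np.einsum('ijd,de,ije->ij', diffs, A, diffs)
        logits = -dists
        np.fill_diagonal(logits, -np.inf)         # exclude j == i
        logits -= logits.max(axis=1, keepdims=True)
        pA = np.exp(logits)
        pA /= pA.sum(axis=1, keepdims=True)
        # Gradient of the KL objective in A:
        # sum_ij (p0_ij - pA_ij) (x_i - x_j)(x_i - x_j)^T
        grad = np.einsum('ij,ijd,ije->de', p0 - pA, diffs, diffs)
        A -= lr * grad
        # Project back onto the PSD cone by clipping negative eigenvalues.
        vals, vecs = np.linalg.eigh(A)
        A = (vecs * np.clip(vals, 0.0, None)) @ vecs.T
    return A

For the low-dimensional representation mentioned at the end of the abstract, one standard route (a sketch, with k a hypothetical target dimension) is to factor A using its top-k eigenpairs, so that Euclidean distance in the projected space approximates d_A:

vals, vecs = np.linalg.eigh(A)                        # ascending eigenvalues
L = np.sqrt(vals[-k:])[:, None] * vecs[:, -k:].T      # (k, d) projection, A ~ L^T L
X_low = X @ L.T                                       # ||Lx - Lx'||^2 ~ d_A(x, x')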
Cite
Text
Globerson and Roweis. "Metric Learning by Collapsing Classes." Neural Information Processing Systems, 2005.

Markdown
[Globerson and Roweis. "Metric Learning by Collapsing Classes." Neural Information Processing Systems, 2005.](https://mlanthology.org/neurips/2005/globerson2005neurips-metric/)

BibTeX
@inproceedings{globerson2005neurips-metric,
title = {{Metric Learning by Collapsing Classes}},
author = {Globerson, Amir and Roweis, Sam T.},
booktitle = {Neural Information Processing Systems},
year = {2005},
pages = {451--458},
url = {https://mlanthology.org/neurips/2005/globerson2005neurips-metric/}
}