Large Margin Non-Linear Embedding

Abstract

It is common in classification methods to first place data in a vector space and then learn decision boundaries. We propose reversing that process: for fixed decision boundaries, we "learn" the location of the data. This way we (i) do not need a metric (or even stronger structure), since pairwise dissimilarities suffice; and (ii) produce low-dimensional embeddings that can be analyzed visually. We achieve this by combining an entropy-based embedding method with an entropy-based version of semi-supervised logistic regression. We present results for clustering and semi-supervised classification.
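The reversed viewpoint described above can be illustrated with a minimal sketch, which is not the authors' algorithm: the decision boundary (a hyperplane) is held fixed, and gradient descent adjusts the 2-D coordinates of the points themselves, trading off a logistic classification loss against a stress term that matches given pairwise dissimilarities. All data, weights (`lam`), and step sizes below are hypothetical choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: 6 items with class labels and a pairwise dissimilarity matrix D.
n = 6
labels = np.array([0, 0, 0, 1, 1, 1])
y = 2 * labels - 1                        # labels in {-1, +1}
base = rng.normal(size=(n, 2)) + 3.0 * labels[:, None]
D = np.linalg.norm(base[:, None] - base[None, :], axis=-1)

# Fixed decision boundary in the 2-D embedding space: the hyperplane w.x + b = 0.
w = np.array([1.0, 0.0])
b = 0.0
lam = 0.05                                # weight of the dissimilarity-matching term

X = 0.1 * rng.normal(size=(n, 2))         # embedding coordinates -- the learned quantity

def objective(X):
    """Logistic loss w.r.t. the fixed boundary plus MDS-style stress; returns (value, gradient)."""
    diff = X[:, None] - X[None, :]
    dist = np.sqrt((diff ** 2).sum(-1) + 1e-9)
    stress = ((dist - D) ** 2).sum()
    g_stress = 4.0 * (((dist - D) / dist)[..., None] * diff).sum(axis=1)
    m = y * (X @ w + b)                   # signed margins of the embedded points
    logistic = np.logaddexp(0.0, -m).sum()
    g_logistic = (-y / (1.0 + np.exp(m)))[:, None] * w
    return logistic + lam * stress, g_logistic + lam * g_stress

f_init, _ = objective(X)
for _ in range(500):                      # plain gradient descent on the point locations
    _, g = objective(X)
    X -= 0.05 * g
f_final, _ = objective(X)
```

Because the stress term is invariant to rotations and translations of the layout, the fixed boundary is what anchors the embedding: the logistic term selects the rigid placement in which the classes fall on their correct sides.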

Cite

Text

Zien and Candela. "Large Margin Non-Linear Embedding." International Conference on Machine Learning, 2005. doi:10.1145/1102351.1102485

Markdown

[Zien and Candela. "Large Margin Non-Linear Embedding." International Conference on Machine Learning, 2005.](https://mlanthology.org/icml/2005/zien2005icml-large/) doi:10.1145/1102351.1102485

BibTeX

@inproceedings{zien2005icml-large,
  title     = {{Large Margin Non-Linear Embedding}},
  author    = {Zien, Alexander and Candela, Joaquin Quiñonero},
  booktitle = {International Conference on Machine Learning},
  year      = {2005},
  pages     = {1060--1067},
  doi       = {10.1145/1102351.1102485},
  url       = {https://mlanthology.org/icml/2005/zien2005icml-large/}
}