Mixtures of Large Margin Nearest Neighbor Classifiers
Abstract
The accuracy of the k-nearest neighbor algorithm depends on the distance function used to measure similarity between instances. Methods have been proposed in the literature to learn a good distance function from a labelled training set. One such method is the large margin nearest neighbor classifier that learns a global Mahalanobis distance. We propose a mixture of such classifiers where a gating function divides the input space into regions and a separate distance function is learned in each region in a lower dimensional manifold. We show that such an extension improves accuracy and allows visualization.
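The idea described in the abstract, a gating function that weights region-specific Mahalanobis distances before running kNN, can be sketched as follows. This is an illustrative toy, not the paper's algorithm: the softmax gate `V`, the per-region projection matrices `Ls`, and the function names are all assumptions for exposition (the paper learns these with a large-margin objective; here they are simply given).

```python
import numpy as np

def softmax(z):
    # numerically stable softmax over the last axis
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def mixture_distance(x, y, V, Ls):
    """Gate-weighted sum of region-specific squared Mahalanobis
    distances d_k = ||L_k x - L_k y||^2, with gates g = softmax(V x).
    V and the projection matrices Ls are assumed to be pre-learned."""
    g = softmax(V @ x)                                  # one gate weight per region
    d = np.array([np.sum((L @ x - L @ y) ** 2) for L in Ls])
    return float(g @ d)

def knn_predict(x, X_train, y_train, V, Ls, k=3):
    # plain kNN vote, but under the learned mixture distance
    dists = np.array([mixture_distance(x, xi, V, Ls) for xi in X_train])
    nn = np.argsort(dists)[:k]
    return int(np.bincount(y_train[nn]).argmax())
```

With a uniform gate and identity projections, `mixture_distance` reduces to the squared Euclidean distance, so ordinary kNN is recovered as a special case; learning distinct low-rank `L_k` per region is what would yield the locality and dimensionality reduction the abstract describes.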
Cite
Text
Semerci and Alpaydin. "Mixtures of Large Margin Nearest Neighbor Classifiers." European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, 2013. doi:10.1007/978-3-642-40991-2_43
Markdown
[Semerci and Alpaydin. "Mixtures of Large Margin Nearest Neighbor Classifiers." European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, 2013.](https://mlanthology.org/ecmlpkdd/2013/semerci2013ecmlpkdd-mixtures/) doi:10.1007/978-3-642-40991-2_43
BibTeX
@inproceedings{semerci2013ecmlpkdd-mixtures,
title = {{Mixtures of Large Margin Nearest Neighbor Classifiers}},
author = {Semerci, Murat and Alpaydin, Ethem},
booktitle = {European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases},
year = {2013},
  pages = {675--688},
doi = {10.1007/978-3-642-40991-2_43},
url = {https://mlanthology.org/ecmlpkdd/2013/semerci2013ecmlpkdd-mixtures/}
}