Discriminant Adaptive Nearest Neighbor Classification and Regression
Abstract
Nearest neighbor classification expects the class conditional probabilities to be locally constant, and suffers from bias in high dimensions. We propose a locally adaptive form of nearest neighbor classification to try to finesse this curse of dimensionality. We use a local linear discriminant analysis to estimate an effective metric for computing neighborhoods. We determine the local decision boundaries from centroid information, and then shrink neighborhoods in directions orthogonal to these local decision boundaries, and elongate them parallel to the boundaries. Thereafter, any neighborhood-based classifier can be employed, using the modified neighborhoods. We also propose a method for global dimension reduction that combines local dimension information. We indicate how these techniques can be extended to the regression problem.
Cite
Text

Hastie and Tibshirani. "Discriminant Adaptive Nearest Neighbor Classification and Regression." Neural Information Processing Systems, 1995.

Markdown

[Hastie and Tibshirani. "Discriminant Adaptive Nearest Neighbor Classification and Regression." Neural Information Processing Systems, 1995.](https://mlanthology.org/neurips/1995/hastie1995neurips-discriminant/)

BibTeX
@inproceedings{hastie1995neurips-discriminant,
title = {{Discriminant Adaptive Nearest Neighbor Classification and Regression}},
author = {Hastie, Trevor and Tibshirani, Robert},
booktitle = {Neural Information Processing Systems},
year = {1995},
pages = {409-415},
url = {https://mlanthology.org/neurips/1995/hastie1995neurips-discriminant/}
}