Dimensionality Reduction Through Sub-Space Mapping for Nearest Neighbor Algorithms
Abstract
Many learning algorithms make an implicit assumption that all the attributes present in the data are relevant to a learning task. However, several studies have demonstrated that this assumption rarely holds; for many supervised learning algorithms, the inclusion of irrelevant or redundant attributes can degrade classification accuracy. While a variety of methods for dimensionality reduction exist, many of these are only appropriate for datasets which contain a small number of attributes (e.g. < 20). This paper presents an alternative approach to dimensionality reduction, and demonstrates how it can be combined with a Nearest Neighbour learning algorithm. We present an empirical evaluation of this approach, and contrast its performance with two related techniques: a Monte-Carlo wrapper and an Information Gain-based filter approach.
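The Information Gain-based filter mentioned as a baseline can be sketched as follows (this is an illustrative reconstruction, not the paper's own code): each attribute is scored by its information gain with respect to the class, the top-k attributes are retained, and a 1-Nearest Neighbour classifier is run on the reduced data. The function names and the toy dataset are assumptions for the sketch.

```python
import math
from collections import Counter

def entropy(labels):
    # Shannon entropy of a list of class labels, in bits
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def info_gain(column, labels):
    # IG = H(class) - H(class | attribute), for a discrete attribute column
    base = entropy(labels)
    n = len(labels)
    cond = 0.0
    for v in set(column):
        subset = [y for x, y in zip(column, labels) if x == v]
        cond += (len(subset) / n) * entropy(subset)
    return base - cond

def select_top_k(X, y, k):
    # rank attribute indices by information gain, keep the best k
    gains = [(info_gain([row[j] for row in X], y), j) for j in range(len(X[0]))]
    gains.sort(reverse=True)
    return [j for _, j in gains[:k]]

def nn_predict(train_X, train_y, query):
    # 1-NN with Hamming distance (attributes here are discrete)
    dist = lambda a, b: sum(p != q for p, q in zip(a, b))
    best = min(range(len(train_X)), key=lambda i: dist(train_X[i], query))
    return train_y[best]

# toy data: attribute 0 determines the class, attribute 1 is noise
X = [[0, 1], [0, 0], [1, 1], [1, 0]]
y = ["a", "a", "b", "b"]
keep = select_top_k(X, y, 1)                      # -> [0], the informative attribute
reduced = [[row[j] for j in keep] for row in X]
print(keep, nn_predict(reduced, y, [0]))
```

Filtering before the distance computation matters for Nearest Neighbour in particular, because every irrelevant attribute contributes noise to the distance metric on equal footing with the relevant ones.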
Cite

Text
Payne and Edwards. "Dimensionality Reduction Through Sub-Space Mapping for Nearest Neighbor Algorithms." European Conference on Machine Learning, 2000. doi:10.1007/3-540-45164-1_35

Markdown
[Payne and Edwards. "Dimensionality Reduction Through Sub-Space Mapping for Nearest Neighbor Algorithms." European Conference on Machine Learning, 2000.](https://mlanthology.org/ecmlpkdd/2000/payne2000ecml-dimensionality/) doi:10.1007/3-540-45164-1_35

BibTeX
@inproceedings{payne2000ecml-dimensionality,
title = {{Dimensionality Reduction Through Sub-Space Mapping for Nearest Neighbor Algorithms}},
author = {Payne, Terry R. and Edwards, Peter},
booktitle = {European Conference on Machine Learning},
year = {2000},
  pages = {331--343},
doi = {10.1007/3-540-45164-1_35},
url = {https://mlanthology.org/ecmlpkdd/2000/payne2000ecml-dimensionality/}
}