Feature Selection from Huge Feature Sets
Abstract
The number of features that can be computed over an image is, for practical purposes, limitless. Unfortunately, the number of features that can be computed and exploited by most computer vision systems is considerably less. As a result, it is important to develop techniques for selecting features from very large data sets that include many irrelevant or redundant features. This work addresses the feature selection problem by proposing a three-step algorithm. The first step uses a variation of the well known Relief algorithm to remove irrelevance; the second step clusters features using K-means to remove redundancy; and the third step is a standard combinatorial feature selection algorithm. This three-step combination is shown to be more effective than standard feature selection algorithms for large data sets with many irrelevant and redundant features. It is also shown to be no worse than standard techniques for data sets that do not have these properties. Finally, we show a third experiment in which a data set with 4096 features is reduced to 5% of its original size with very little information loss.
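To make the pipeline in the abstract concrete, here is a minimal sketch of the first two steps in NumPy: a basic Relief pass that scores features by how well they separate classes, followed by K-means over the feature columns, keeping the highest-scoring feature from each cluster. All function names and parameters here are illustrative, not the authors' implementation, and the third step (a standard combinatorial search such as sequential selection) would then run on the surviving features.

```python
import numpy as np

def relief_scores(X, y, n_iters=100, rng=None):
    """Basic Relief: reward features that differ from the nearest miss
    (other class) and penalize those that differ from the nearest hit
    (same class). A simplification of the variant used in the paper."""
    rng = rng or np.random.default_rng(0)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_iters):
        i = rng.integers(n)
        dists = np.abs(X - X[i]).sum(axis=1)
        dists[i] = np.inf  # never match a sample with itself
        same = y == y[i]
        hit = np.argmin(np.where(same, dists, np.inf))
        miss = np.argmin(np.where(~same, dists, np.inf))
        w += np.abs(X[i] - X[miss]) - np.abs(X[i] - X[hit])
    return w / n_iters

def cluster_representatives(X, scores, k, n_iters=20, rng=None):
    """Lloyd's K-means over feature columns (one point per feature);
    keep the highest-Relief-score member of each cluster to
    remove redundant, near-duplicate features."""
    rng = rng or np.random.default_rng(0)
    F = X.T  # one row per feature
    centers = F[rng.choice(len(F), k, replace=False)]
    for _ in range(n_iters):
        labels = np.argmin(((F[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = F[labels == j].mean(axis=0)
    reps = []
    for j in range(k):
        members = np.flatnonzero(labels == j)
        if members.size:
            reps.append(members[np.argmax(scores[members])])
    return sorted(reps)
```

Usage on synthetic data: score all features with `relief_scores`, then pass the scores to `cluster_representatives` to collapse each cluster of correlated features to a single representative before the (more expensive) combinatorial step.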
Cite
Text
Bins and Draper. "Feature Selection from Huge Feature Sets." IEEE/CVF International Conference on Computer Vision, 2001. doi:10.1109/ICCV.2001.937619
Markdown
[Bins and Draper. "Feature Selection from Huge Feature Sets." IEEE/CVF International Conference on Computer Vision, 2001.](https://mlanthology.org/iccv/2001/bins2001iccv-feature/) doi:10.1109/ICCV.2001.937619
BibTeX
@inproceedings{bins2001iccv-feature,
title = {{Feature Selection from Huge Feature Sets}},
author = {Bins, José and Draper, Bruce A.},
booktitle = {IEEE/CVF International Conference on Computer Vision},
year = {2001},
pages = {159-165},
doi = {10.1109/ICCV.2001.937619},
url = {https://mlanthology.org/iccv/2001/bins2001iccv-feature/}
}