A Practical Approach to Feature Selection
Abstract
In real-world concept learning problems, the representation of data often uses many features, only a few of which may be related to the target concept. In this situation, feature selection is important both to speed up learning and to improve concept quality. A new feature selection algorithm, Relief, uses a statistical method and avoids heuristic search. Relief requires linear time in the number of given features and the number of training instances, regardless of the target concept to be learned. Although the algorithm does not necessarily find the smallest subset of features, the size tends to be small because only statistically relevant features are selected. This paper focuses on empirical test results in two artificial domains: the LED Display domain and the Parity domain, with and without noise. Comparison with other feature selection algorithms shows Relief's advantages in terms of learning time and the accuracy of the learned concept, suggesting Relief's practicality.
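The statistical method the abstract refers to weights each feature by how well it separates an instance from its nearest neighbor of the opposite class versus its nearest neighbor of the same class, then keeps features whose weight exceeds a relevance threshold. A minimal sketch of this Relief-style weighting is below; the function name, the threshold parameter `tau`, and the toy data are illustrative choices, not taken from the paper.

```python
import random


def relief(X, y, m=None, tau=0.0, seed=0):
    """Relief-style feature weighting (sketch, not the paper's exact code).

    X: list of numeric feature vectors; y: binary class labels.
    Returns indices of features whose relevance weight exceeds tau.
    """
    rng = random.Random(seed)
    n, p = len(X), len(X[0])
    m = m or n  # number of randomly sampled instances

    # Feature ranges, used to normalize per-feature differences to [0, 1].
    ranges = [max(x[f] for x in X) - min(x[f] for x in X) or 1.0
              for f in range(p)]

    def diff(f, a, b):
        return abs(a[f] - b[f]) / ranges[f]

    def dist(a, b):
        return sum(diff(f, a, b) for f in range(p))

    w = [0.0] * p
    for _ in range(m):
        i = rng.randrange(n)
        xi, yi = X[i], y[i]
        # Nearest hit (same class) and nearest miss (opposite class).
        hit = min((X[j] for j in range(n) if j != i and y[j] == yi),
                  key=lambda z: dist(xi, z))
        miss = min((X[j] for j in range(n) if y[j] != yi),
                   key=lambda z: dist(xi, z))
        # A relevant feature differs on the miss but not on the hit.
        for f in range(p):
            w[f] += (diff(f, xi, miss) - diff(f, xi, hit)) / m
    return [f for f in range(p) if w[f] > tau]


# Toy usage: feature 0 determines the class, feature 1 is irrelevant.
X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [0, 0, 1, 1]
print(relief(X, y, tau=0.5))  # → [0]
```

Each of the `m` sampled instances needs one pass over the features to update the weights, which is where the linear cost in both the number of features and the number of training instances comes from.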
Cite
Kira and Rendell. "A Practical Approach to Feature Selection." International Conference on Machine Learning, 1992. doi:10.1016/B978-1-55860-247-2.50037-1
BibTeX
@inproceedings{kira1992icml-practical,
title = {{A Practical Approach to Feature Selection}},
author = {Kira, Kenji and Rendell, Larry A.},
booktitle = {International Conference on Machine Learning},
year = {1992},
pages = {249--256},
doi = {10.1016/B978-1-55860-247-2.50037-1},
url = {https://mlanthology.org/icml/1992/kira1992icml-practical/}
}