Sharper Bounds for the Hardness of Prototype and Feature Selection
Abstract
As pointed out by Blum [Blu94], “nearly all results in Machine Learning [...] deal with problems of separating relevant from irrelevant information in some way”. This paper is concerned with structural complexity issues regarding the selection of relevant Prototypes or Features. We give the first results proving that both problems can be much harder than expected in the literature for various notions of relevance. In particular, the worst-case bounds achievable by any efficient algorithm are proven to be very large, most of the time not far from trivial bounds. We think these results give a theoretical justification for the numerous heuristic approaches found in the literature to cope with these problems.
Cite
Nock and Sebban. "Sharper Bounds for the Hardness of Prototype and Feature Selection." International Conference on Algorithmic Learning Theory, 2000. doi:10.1007/3-540-40992-0_17
@inproceedings{nock2000alt-sharper,
title = {{Sharper Bounds for the Hardness of Prototype and Feature Selection}},
author = {Nock, Richard and Sebban, Marc},
booktitle = {International Conference on Algorithmic Learning Theory},
year = {2000},
pages = {224--237},
doi = {10.1007/3-540-40992-0_17},
url = {https://mlanthology.org/alt/2000/nock2000alt-sharper/}
}