Comparing Several Linear-Threshold Learning Algorithms on Tasks Involving Superfluous Attributes
Abstract
Using simulations, we compare several linear-threshold learning algorithms that differ greatly in how superfluous attributes affect their learning abilities. These include a Bayesian algorithm for conditionally independent attributes and two mistake-driven algorithms (algorithms that learn only from trials in which they predict incorrectly), Winnow and the Perceptron algorithm. We also look at a mistake-driven modification of the Bayesian algorithm. When there are many superfluous attributes, Winnow makes the fewest mistakes, though in our experiments it takes a great many such attributes to make this difference marked. With the addition of what we call a checking procedure, Winnow was eventually able to get within twice the optimal loss rate in all of the experiments on which we focused, and usually much closer.
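Winnow's robustness to superfluous attributes comes from its multiplicative weight updates, applied only on mistakes. The following is a minimal sketch of a Winnow2-style learner over boolean attributes (the threshold of n/2, the promotion factor alpha = 2, and all names are illustrative assumptions, not details taken from this paper):

```python
def winnow(examples, n, threshold=None, alpha=2.0):
    """Winnow2-style mistake-driven linear-threshold learner (illustrative sketch).

    examples: iterable of (x, y), where x is a 0/1 list of length n and y is 0 or 1.
    Returns the final weight vector and the number of mistakes made.
    """
    if threshold is None:
        threshold = n / 2          # assumed threshold choice for this sketch
    w = [1.0] * n                  # all weights start equal
    mistakes = 0
    for x, y in examples:
        pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) >= threshold else 0
        if pred != y:              # mistake-driven: update only on errors
            mistakes += 1
            if y == 1:             # false negative: promote the active weights
                w = [wi * alpha if xi else wi for wi, xi in zip(w, x)]
            else:                  # false positive: demote the active weights
                w = [wi / alpha if xi else wi for wi, xi in zip(w, x)]
    return w, mistakes
```

Because irrelevant attributes are repeatedly demoted by a multiplicative factor while relevant ones are promoted, Winnow's mistake bound for targets such as k-literal monotone disjunctions grows only logarithmically in the total number of attributes, which is why many superfluous attributes hurt it far less than the Perceptron algorithm. (Setting demoted weights to zero instead of halving them gives the Winnow1 variant.)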
Cite
Text
Littlestone. "Comparing Several Linear-Threshold Learning Algorithms on Tasks Involving Superfluous Attributes." International Conference on Machine Learning, 1995. doi:10.1016/B978-1-55860-377-6.50051-7
Markdown
[Littlestone. "Comparing Several Linear-Threshold Learning Algorithms on Tasks Involving Superfluous Attributes." International Conference on Machine Learning, 1995.](https://mlanthology.org/icml/1995/littlestone1995icml-comparing/) doi:10.1016/B978-1-55860-377-6.50051-7
BibTeX
@inproceedings{littlestone1995icml-comparing,
title = {{Comparing Several Linear-Threshold Learning Algorithms on Tasks Involving Superfluous Attributes}},
author = {Littlestone, Nick},
booktitle = {International Conference on Machine Learning},
year = {1995},
pages = {353-361},
doi = {10.1016/B978-1-55860-377-6.50051-7},
url = {https://mlanthology.org/icml/1995/littlestone1995icml-comparing/}
}