Reducing Redundant Learning
Abstract
A principle of learning is proposed to help design and improve incremental learning algorithms. The principle of nonredundant learning is defined and related to the basic measures of the number of instances required, the processing time per instance, and the prediction accuracy. An algorithm, CORA, was designed to exploit this principle, and its empirical behavior was compared to that of another algorithm that does not exploit the principle in the same way. A limitation of CORA is presented that suggests a second example of the learning principle. The paper argues for the significance of CORA as an incremental learner and for the more general significance of the learning principle.
Cite
Text
Martin. "Reducing Redundant Learning." International Conference on Machine Learning, 1989. doi:10.1016/b978-1-55860-036-2.50100-4

Markdown

[Martin. "Reducing Redundant Learning." International Conference on Machine Learning, 1989.](https://mlanthology.org/icml/1989/martin1989icml-reducing/) doi:10.1016/b978-1-55860-036-2.50100-4

BibTeX
@inproceedings{martin1989icml-reducing,
title = {{Reducing Redundant Learning}},
author = {Martin, Joel D.},
booktitle = {International Conference on Machine Learning},
year = {1989},
pages = {396-399},
doi = {10.1016/b978-1-55860-036-2.50100-4},
url = {https://mlanthology.org/icml/1989/martin1989icml-reducing/}
}