Getting the Most from Flawed Theories
Abstract
This paper introduces a new classification technique called degree-of-provedness classification, or DOP-classification. This technique exploits information implicit in the structure of a possibly incomplete or incorrect domain theory in order to improve classification accuracy. It is also shown how DOP-classification can be used to identify theories for which theory revision is unnecessary (because the unrevised theory can be used directly by DOP-classification to achieve near-perfect classification accuracy) or insufficient (because the initial theory is so flawed that it would be preferable to induce a new theory directly from examples).
Cite
Text
Koppel et al. "Getting the Most from Flawed Theories." International Conference on Machine Learning, 1994. doi:10.1016/B978-1-55860-335-6.50025-8
Markdown
[Koppel et al. "Getting the Most from Flawed Theories." International Conference on Machine Learning, 1994.](https://mlanthology.org/icml/1994/koppel1994icml-getting/) doi:10.1016/B978-1-55860-335-6.50025-8
BibTeX
@inproceedings{koppel1994icml-getting,
title = {{Getting the Most from Flawed Theories}},
author = {Koppel, Moshe and Segre, Alberto Maria and Feldman, Ronen},
booktitle = {International Conference on Machine Learning},
year = {1994},
pages = {139--147},
doi = {10.1016/B978-1-55860-335-6.50025-8},
url = {https://mlanthology.org/icml/1994/koppel1994icml-getting/}
}