Learning from Data with Bounded Inconsistency
Abstract
This paper presents an approach to concept learning from inconsistent data that foregoes a solution to the full-blown problem and instead considers a subcase, called bounded inconsistency. Data are said to have bounded inconsistency when some small perturbation to the description of any bad instance results in a good instance. The key idea for learning in the presence of bounded inconsistency is to consider not only concept definitions that correctly classify all the training data, but also those that misclassify some of the data by only a small amount. The approach is implemented using a generalization of Mitchell's version-space approach to concept learning.
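The core idea from the abstract, retaining hypotheses that either classify an instance correctly or would classify some small perturbation of it correctly, can be sketched as follows. This is an illustrative toy, not the paper's actual algorithm: the interval hypotheses, the `delta` perturbation bound, and all function names are assumptions made for the example.

```python
# Sketch of version-space filtering under bounded inconsistency.
# Hypotheses are intervals [lo, hi] over the reals; instances are
# (x, label) pairs. A hypothesis survives if it classifies each
# instance correctly, or would classify some point within `delta`
# of the instance correctly (the "small perturbation" of the paper).

def classifies(h, x):
    lo, hi = h
    return lo <= x <= hi

def consistent_within(h, x, label, delta):
    # Consistent with the instance itself?
    if classifies(h, x) == label:
        return True
    # For interval hypotheses it suffices to test the two extreme
    # perturbations x - delta and x + delta.
    return any(classifies(h, x + d) == label for d in (-delta, delta))

def filter_version_space(hypotheses, data, delta):
    return [h for h in hypotheses
            if all(consistent_within(h, x, label, delta)
                   for x, label in data)]

# Toy candidate concepts and slightly noisy training data:
hypotheses = [(0, 5), (0, 6), (1, 5)]
data = [(1.0, True), (4.0, True), (5.3, True), (7.0, False)]

# With delta = 0.5, the positive instance at 5.3 is "close enough"
# to intervals ending at 5, so they survive alongside (0, 6).
print(filter_version_space(hypotheses, data, delta=0.5))
```

With `delta = 0` this reduces to ordinary strict consistency, and only `(0, 6)` survives; the perturbation bound is what keeps the version space from collapsing on near-miss data.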
Cite

Text:
Hirsh. "Learning from Data with Bounded Inconsistency." International Conference on Machine Learning, 1990. doi:10.1016/B978-1-55860-141-3.50008-0

Markdown:
[Hirsh. "Learning from Data with Bounded Inconsistency." International Conference on Machine Learning, 1990.](https://mlanthology.org/icml/1990/hirsh1990icml-learning/) doi:10.1016/B978-1-55860-141-3.50008-0

BibTeX:
@inproceedings{hirsh1990icml-learning,
title = {{Learning from Data with Bounded Inconsistency}},
author = {Hirsh, Haym},
booktitle = {International Conference on Machine Learning},
year = {1990},
pages = {32--39},
doi = {10.1016/B978-1-55860-141-3.50008-0},
url = {https://mlanthology.org/icml/1990/hirsh1990icml-learning/}
}