Polynomial-Time Learning with Version Spaces
Abstract
Although version spaces provide a useful conceptual tool for inductive concept learning, they often face severe computational difficulties when implemented. For example, the G set of traditional boundary-set implementations of version spaces can have size exponential in the amount of data for even the simplest conjunctive description languages [Haussler, 1988]. This paper presents a new representation for version spaces that is more general than the traditional boundary-set representation, yet has worst-case time complexity that is polynomial in the amount of data when used for learning from attribute-value data with tree-structured feature hierarchies (which includes languages like Haussler's). The central idea underlying this new representation is to maintain the traditional S boundary set as usual, but to use a list N of negative data rather than keeping a G set as is typically done.

1. Introduction

Concept learning can be viewed as a problem of search [Simon and Lea, 1974; Mit...
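The [S, N] idea above can be sketched in code. The following is a minimal illustration, not the paper's implementation: feature hierarchies are simplified to flat attributes with a `?` wildcard, and all class and function names are invented for this sketch. The key point it demonstrates is that each negative example costs O(1) to store in N (versus a potentially exponential G-set update), while membership and consistency tests remain cheap because, for conjunctive concepts, coverage is monotone in generality: if S covers a negative, so does every generalization of S.

```python
# Hypothetical sketch of the [S, N] version-space representation for
# conjunctive attribute-value concepts. Hierarchies are flattened to a
# single '?' wildcard per attribute; names are illustrative only.

WILDCARD = "?"

def covers(h, x):
    """True if hypothesis h (a tuple of values or '?') covers instance x."""
    return all(hv == WILDCARD or hv == xv for hv, xv in zip(h, x))

def generalize(s, x):
    """Minimally generalize the single S-boundary concept s to cover x."""
    return tuple(sv if sv == xv else WILDCARD for sv, xv in zip(s, x))

def at_least_as_general(h, g):
    """True if h covers everything g covers (h generalizes g)."""
    return all(hv == WILDCARD or hv == gv for hv, gv in zip(h, g))

class SNVersionSpace:
    def __init__(self):
        self.s = None   # S boundary: the single most-specific concept
        self.n = []     # N: the raw list of negative examples

    def add_positive(self, x):
        self.s = x if self.s is None else generalize(self.s, x)

    def add_negative(self, x):
        self.n.append(x)  # O(1) per negative: no G-set computation

    def consistent(self):
        """Nonempty iff S covers no stored negative (coverage is
        monotone in generality, so S is the best candidate)."""
        return self.s is not None and not any(covers(self.s, x) for x in self.n)

    def contains(self, h):
        """h is in the version space iff it generalizes S and excludes N."""
        return (self.s is not None
                and at_least_as_general(h, self.s)
                and not any(covers(h, x) for x in self.n))
```

As a usage example, two positives `("red", "round", "small")` and `("red", "square", "small")` drive S to `("red", "?", "small")`; adding the negative `("blue", "round", "large")` leaves the space consistent, and `("red", "?", "?")` is in it while the all-wildcard concept is not.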
Hirsh. "Polynomial-Time Learning with Version Spaces." AAAI Conference on Artificial Intelligence, 1992.
BibTeX
@inproceedings{hirsh1992aaai-polynomial,
title = {{Polynomial-Time Learning with Version Spaces}},
author = {Hirsh, Haym},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {1992},
pages = {117-122},
url = {https://mlanthology.org/aaai/1992/hirsh1992aaai-polynomial/}
}