Learning (Complex) Structural Descriptions from Examples

Abstract

A methodology is proposed for constructing an optimized recognition tree from a description of several concepts and their examples. The optimization is relative to two factors. First, the recognition tree does not grow as rapidly as the number of different examples (only “relevant” details are remembered). Second, given a learning set, the recognition tree must be adaptable enough to be modified by a new example without restarting the whole learning process. The key tool used to achieve this goal has been named the “most promising partition.” Most of this paper is devoted to the definition and use of this notion. Rather than constructing a recognition tree based on a description of the concepts themselves, as is usually done in inductive learning [S. A. Vere, Artificial Intelligence J. 14, 1980, 139–164], a tree using a description of the differences between concepts is obtained.
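The two ideas in the abstract — remembering only the details that discriminate between concepts, and absorbing a new example without relearning from scratch — can be illustrated with a minimal sketch. This is not the authors' algorithm (which rests on the “most promising partition” over structural descriptions); it is a simplified discrimination tree over attribute–value examples, with all names hypothetical:

```python
class Node:
    """A recognition-tree node: either a leaf holding one example,
    or an internal node discriminating on a single attribute."""
    def __init__(self):
        self.attr = None        # discriminating attribute (internal node)
        self.children = {}      # attribute value -> child Node
        self.example = None     # stored example features (leaf)
        self.concept = None     # stored concept label (leaf)

def insert(node, features, concept):
    """Add one example incrementally; only the affected leaf is refined."""
    # Descend along the discriminations already learned.
    while node.attr is not None:
        node = node.children.setdefault(features.get(node.attr), Node())
    if node.concept is None:            # empty leaf: remember the example
        node.example, node.concept = features, concept
        return
    if node.concept == concept:         # same concept: no new difference
        return
    # Conflicting concepts at one leaf: split on an attribute whose
    # values differ (a crude stand-in for the "most promising partition").
    for a in features:
        if node.example.get(a) != features.get(a):
            old_feats, old_concept = node.example, node.concept
            node.attr, node.example, node.concept = a, None, None
            insert(node, old_feats, old_concept)
            insert(node, features, concept)
            return

def classify(node, features):
    """Follow the stored differences down to a concept label."""
    while node.attr is not None and features.get(node.attr) in node.children:
        node = node.children[features.get(node.attr)]
    return node.concept
```

The tree grows only when a new example contradicts a stored one, so its size tracks the number of inter-concept differences rather than the number of examples, and each new example touches a single leaf rather than triggering a full relearn — the two optimization criteria the abstract names.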

Cite

Text

Loisel and Kodratoff. "Learning (Complex) Structural Descriptions from Examples." International Joint Conference on Artificial Intelligence, 1981. doi:10.1016/0734-189X(84)90032-X

Markdown

[Loisel and Kodratoff. "Learning (Complex) Structural Descriptions from Examples." International Joint Conference on Artificial Intelligence, 1981.](https://mlanthology.org/ijcai/1981/loisel1981ijcai-learning/) doi:10.1016/0734-189X(84)90032-X

BibTeX

@inproceedings{loisel1981ijcai-learning,
  title     = {{Learning (Complex) Structural Descriptions from Examples}},
  author    = {Loisel, Regine and Kodratoff, Yves},
  booktitle = {International Joint Conference on Artificial Intelligence},
  year      = {1981},
  pages     = {141--143},
  doi       = {10.1016/0734-189X(84)90032-X},
  url       = {https://mlanthology.org/ijcai/1981/loisel1981ijcai-learning/}
}