Selecting Appropriate Representations for Learning from Examples
Abstract
The task of inductive learning from examples places constraints on the representation of training instances and concepts. These constraints are different from, and often incompatible with, the constraints placed on the representation by the performance task. This incompatibility explains why previous researchers have found it so difficult to construct good representations for inductive learning—they were trying to achieve a compromise between these two sets of constraints. To address this problem, we have developed a system that employs two different representations: one for learning and one for performance. The system accepts training instances in the performance representation, converts them into a learning representation where they are inductively generalized, and then maps the learned concept back into the performance representation. The advantages of this approach are (a) many fewer training instances are required to learn the concept, (b) the biases of the program are very simple, and (c) the system requires virtually no vocabulary engineering to learn concepts in a new domain.
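The pipeline the abstract describes—accept instances in the performance representation, translate them into a learning representation, generalize inductively, then map the learned concept back—can be sketched as follows. This is a toy illustration with hypothetical names and a deliberately simple generalizer (intersection of attribute-value sets), not the authors' actual system.

```python
# Hypothetical sketch of the dual-representation pipeline; all function
# names and the toy generalizer are illustrative, not the authors' system.

def to_learning_rep(instance):
    # Map a performance-representation instance (here a dict of raw
    # attributes) into a learning representation: a set of
    # attribute-value pairs suited to inductive generalization.
    return set(instance.items())

def generalize(examples):
    # Toy inductive step: the common generalization of the positive
    # examples is the intersection of their attribute-value sets.
    reps = [to_learning_rep(e) for e in examples]
    return set.intersection(*reps)

def to_performance_rep(concept):
    # Map the learned concept back into the performance representation
    # (here, a dict the performance element could use directly).
    return dict(concept)

# Two positive examples in the performance representation.
positives = [
    {"piece": "rook", "file": "a", "attacked": True},
    {"piece": "rook", "file": "h", "attacked": True},
]

# The learned concept keeps only what the examples share.
learned = to_performance_rep(generalize(positives))
print(learned)  # the 'file' attribute is generalized away
```

The point of the sketch is the separation of concerns: only `to_learning_rep` and `to_performance_rep` know about the performance task, so the inductive step stays simple—which is how the abstract's claims (b) and (c) arise.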
Cite
Text
Flann and Dietterich. "Selecting Appropriate Representations for Learning from Examples." AAAI Conference on Artificial Intelligence, 1986.
Markdown
[Flann and Dietterich. "Selecting Appropriate Representations for Learning from Examples." AAAI Conference on Artificial Intelligence, 1986.](https://mlanthology.org/aaai/1986/flann1986aaai-selecting/)
BibTeX
@inproceedings{flann1986aaai-selecting,
title = {{Selecting Appropriate Representations for Learning from Examples}},
author = {Flann, Nicholas S. and Dietterich, Thomas G.},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {1986},
pages = {460-466},
url = {https://mlanthology.org/aaai/1986/flann1986aaai-selecting/}
}