A Minimal Encoding Approach to Feature Discovery
Abstract
This paper discusses unsupervised learning of orthogonal concepts on relational data. Relational predicates, while formally equivalent to the features of the concept-learning literature, are not a good basis for defining concepts. Hence the current task demands a much larger search space than traditional concept-learning algorithms explore, the sort of space explored by connectionist algorithms. However, the intended application, using the discovered concepts in the Cyc knowledge base, requires that the concepts be interpretable by a human, an ability not yet realized with connectionist algorithms. Interpretability is aided by including a characterization of simplicity in the evaluation function. For Hinton's Family Relations data, we do find cleaner, more intuitive features. Yet when the solutions are not known in advance, the difficulty of interpreting even features that meet the simplicity criteria calls into question the usefulness of any reformulation algorithm that creates radically new primitives in a knowledge-based setting. At the very least, much more sophisticated explanation tools are needed.
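The simplicity criterion described in the abstract is in the spirit of minimal (minimum-description-length) encoding. As a hedged illustration only, not the paper's actual evaluation function, a two-part MDL score charges for the bits needed to state the model plus the bits needed to encode the data under it, so a more complex feature set wins only if it pays for itself in better fit (the function name and example numbers below are hypothetical):

```python
import math

def mdl_score(model_bits, data, likelihood):
    """Two-part minimum-description-length score (in bits):
    cost of stating the model itself plus cost of encoding
    the data under that model."""
    data_bits = -sum(math.log2(likelihood(x)) for x in data)
    return model_bits + data_bits

# Hypothetical example: on skewed data, a model that is more
# expensive to describe can still yield a shorter total encoding.
data = ["a"] * 7 + ["b"]
simple = mdl_score(1, data, lambda x: 0.5)                      # uniform model
tuned = mdl_score(3, data, lambda x: 0.875 if x == "a" else 0.125)
# tuned < simple: the extra model bits buy a cheaper data encoding
```

Under such a score, an uninterpretably complex feature set is penalized directly, which is one way a simplicity term can steer search toward human-readable concepts.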
Cite

Text

Derthick. "A Minimal Encoding Approach to Feature Discovery." AAAI Conference on Artificial Intelligence, 1991.

Markdown

[Derthick. "A Minimal Encoding Approach to Feature Discovery." AAAI Conference on Artificial Intelligence, 1991.](https://mlanthology.org/aaai/1991/derthick1991aaai-minimal/)

BibTeX
@inproceedings{derthick1991aaai-minimal,
  title = {{A Minimal Encoding Approach to Feature Discovery}},
  author = {Derthick, Mark},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year = {1991},
  pages = {565--571},
  url = {https://mlanthology.org/aaai/1991/derthick1991aaai-minimal/}
}