Learning of Compositional Hierarchies by Data-Driven Chunking
Abstract
ly, we proceed by repeatedly identifying frequently occurring substructures and forming new nodes for them in the hierarchy. We have developed two systems, both implemented as symmetric, recurrent neural networks. Both can be viewed as follow-ups to the successful interactive activation model (IAM) (McClelland & Rumelhart 1981), extending it in a number of ways, primarily to incorporate learning. The first system, along with more detail on compositional hierarchy (CH) learning in general, is described in (Pfleger 1998). The second system uses a Boltzmann machine, extended to handle categorical values and weight sharing. As with the IAM, the network encodes a CH directly using a localist representation. Weight sharing and "hardware" duplication (unrolling) are used to model atomic symbols and chunks at different positions. Atomic symbols (the data) are visible variables, and the chunks are hidden variables. Chunking is accomplished as an on-line structure-modification rule. Specifically, new chunks are created when the weig...
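The core loop described above (find the most frequent substructure, create a new node for it, and re-encode the data in terms of that node) can be illustrated with a minimal, non-neural sketch. Note this is only an assumption-laden illustration of the chunking idea, not the paper's network-based implementation: the function name, the `threshold` parameter, and the pairwise-merge restriction are all hypothetical choices for this example.

```python
from collections import Counter

def chunk_hierarchy(sequences, threshold=3, max_chunks=10):
    """Repeatedly merge the most frequent adjacent pair into a new chunk node.

    Illustrative only: the paper's systems drive chunk creation from
    co-occurrence statistics inside a recurrent network; here we just
    count adjacent pairs in symbol sequences.
    """
    seqs = [list(s) for s in sequences]
    hierarchy = {}  # new chunk symbol -> (left constituent, right constituent)
    for _ in range(max_chunks):
        pairs = Counter()
        for s in seqs:
            pairs.update(zip(s, s[1:]))
        if not pairs:
            break
        (a, b), count = pairs.most_common(1)[0]
        if count < threshold:
            break  # no substructure occurs frequently enough to chunk
        chunk = f"<{a}{b}>"  # a new hidden node covering the pair
        hierarchy[chunk] = (a, b)
        # Re-encode the data in terms of the new chunk node.
        new_seqs = []
        for s in seqs:
            out, j = [], 0
            while j < len(s):
                if j + 1 < len(s) and (s[j], s[j + 1]) == (a, b):
                    out.append(chunk)
                    j += 2
                else:
                    out.append(s[j])
                    j += 1
            new_seqs.append(out)
        seqs = new_seqs
    return hierarchy, seqs

hierarchy, encoded = chunk_hierarchy(["abab", "abc", "ab"])
```

On this toy input the pair ("a", "b") occurs four times, so a single chunk `<ab>` is created and the sequences are rewritten in terms of it; deeper hierarchies arise when chunks of chunks become frequent.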
Pfleger. "Learning of Compositional Hierarchies by Data-Driven Chunking." AAAI Conference on Artificial Intelligence, 1999.
@inproceedings{pfleger1999aaai-learning,
title = {{Learning of Compositional Hierarchies by Data-Driven Chunking}},
author = {Pfleger, Karl},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {1999},
pages = {977},
url = {https://mlanthology.org/aaai/1999/pfleger1999aaai-learning/}
}