Graded Grammaticality in Prediction Fractal Machines
Abstract
We introduce a novel method of constructing language models, which avoids some of the problems associated with recurrent neural networks. The method of creating a Prediction Fractal Machine (PFM) [1] is briefly described, and some experiments are presented which demonstrate the suitability of PFMs for language modeling. PFMs distinguish reliably between minimal pairs, and their behavior is consistent with the hypothesis [4] that well-formedness is 'graded' rather than absolute. A discussion of their potential to offer fresh insights into language acquisition and processing follows.
Parfitt et al. "Graded Grammaticality in Prediction Fractal Machines." Neural Information Processing Systems, 1999.
@inproceedings{parfitt1999neurips-graded,
title = {{Graded Grammaticality in Prediction Fractal Machines}},
author = {Parfitt, Shan and Tiňo, Peter and Dorffner, Georg},
booktitle = {Neural Information Processing Systems},
year = {1999},
pages = {52--58},
url = {https://mlanthology.org/neurips/1999/parfitt1999neurips-graded/}
}