SARDSRN: A Neural Network Shift-Reduce Parser
Abstract
Simple Recurrent Networks (SRNs) have been widely used in natural language tasks. SARDSRN extends the SRN by explicitly representing the input sequence in a SARDNET self-organizing map. The distributed SRN component leads to good generalization and robust cognitive properties, whereas the SARDNET map provides exact representations of the sentence constituents. This combination allows SARDSRN to learn to parse sentences with more complicated structure than can the SRN alone, and suggests that the approach could scale up to realistic natural language.

1 Introduction

The subsymbolic approach (i.e., neural networks with distributed representations) to processing language is attractive for several reasons. First, it is inherently robust: the distributed representations display graceful degradation of performance in the presence of noise, damage, and incomplete or conflicting input (Miikkulainen 1993; St. John and McClelland 1990). Second, because computation in these networks is constraint-...
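The abstract's claim that the SARDNET map "provides exact representations of the sentence constituents" rests on SARDNET's sequence-encoding rule (James and Miikkulainen 1995): each new input activates its closest still-unused map unit at full strength, and all previous winners are decayed by a constant factor, so both the identity and the order of the inputs remain recoverable from the map. A minimal sketch of that rule, with illustrative names, map size, and decay constant (all assumptions, not taken from the paper):

```python
# Hedged sketch of the SARDNET sequence-encoding rule used by SARDSRN.
# The function name, map size, and decay constant are illustrative
# assumptions; only the winner-removal-plus-decay scheme follows the
# published SARDNET description.
import numpy as np

def sardnet_encode(sequence, weights, decay=0.9):
    """Encode a sequence of input vectors on a SARDNET map.

    sequence: iterable of input vectors, each of shape (d,)
    weights:  map-unit weight vectors, shape (n_units, d)
    Returns the final map activation vector, shape (n_units,).
    """
    n_units = weights.shape[0]
    activation = np.zeros(n_units)
    available = np.ones(n_units, dtype=bool)  # units not yet used as winners
    for x in sequence:
        # Winner = closest *available* unit to the current input.
        dists = np.linalg.norm(weights - x, axis=1)
        dists[~available] = np.inf
        winner = int(np.argmin(dists))
        # Decay earlier winners, then activate the new one at full strength.
        activation *= decay
        activation[winner] = 1.0
        available[winner] = False  # remove winner from further competition
    return activation

# Example: three 2-D inputs on a 6-unit map with random weights.
rng = np.random.default_rng(0)
W = rng.random((6, 2))
act = sardnet_encode([np.array([0.1, 0.2]),
                      np.array([0.8, 0.9]),
                      np.array([0.5, 0.5])], W)
# Exactly three distinct units are active, with strengths 1.0, 0.9, and
# 0.81 in reverse order of presentation, so the sequence is recoverable.
```

Because every winner is removed from further competition, no two inputs share a unit, which is what gives the map its "exact" (localist) trace of the sentence alongside the SRN's distributed state.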
Cite
Mayberry and Miikkulainen. "SARDSRN: A Neural Network Shift-Reduce Parser." International Joint Conference on Artificial Intelligence, 1999.
BibTeX
@inproceedings{mayberry1999ijcai-sardsrn,
title = {{SARDSRN: A Neural Network Shift-Reduce Parser}},
author = {Mayberry, Marshall R. and Miikkulainen, Risto},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {1999},
pages = {820-827},
url = {https://mlanthology.org/ijcai/1999/mayberry1999ijcai-sardsrn/}
}