Recurrent Greedy Parsing with Neural Networks
Abstract
In this paper, we propose a bottom-up, greedy, and purely discriminative syntactic parsing approach that relies only on a few simple features. The core of the architecture is a simple neural network, trained with an objective function similar to that of a Conditional Random Field. This parser leverages continuous word vector representations to model the conditional distributions of context-aware syntactic rules. The learned rule distributions are naturally smoothed, thanks to the continuous nature of the input features and the model. Generalization accuracy compares favorably to existing generative or discriminative (non-reranking) parsers, despite the greedy nature of our approach, while prediction is very fast.
Cite
Text
Legrand and Collobert. "Recurrent Greedy Parsing with Neural Networks." European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, 2014. doi:10.1007/978-3-662-44851-9_9

Markdown

[Legrand and Collobert. "Recurrent Greedy Parsing with Neural Networks." European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, 2014.](https://mlanthology.org/ecmlpkdd/2014/legrand2014ecmlpkdd-recurrent/) doi:10.1007/978-3-662-44851-9_9

BibTeX
@inproceedings{legrand2014ecmlpkdd-recurrent,
title = {{Recurrent Greedy Parsing with Neural Networks}},
author = {Legrand, Joël and Collobert, Ronan},
booktitle = {European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases},
year = {2014},
  pages = {130--144},
doi = {10.1007/978-3-662-44851-9_9},
url = {https://mlanthology.org/ecmlpkdd/2014/legrand2014ecmlpkdd-recurrent/}
}