Discriminative Log-Linear Grammars with Latent Variables
Abstract
We demonstrate that log-linear grammars with latent variables can be practically trained using discriminative methods. Central to efficient discriminative training is a hierarchical pruning procedure which allows feature expectations to be efficiently approximated in a gradient-based procedure. We compare L1 and L2 regularization and show that L1 regularization is superior, requiring fewer iterations to converge and yielding sparser solutions. On full-scale treebank parsing experiments, the discriminative latent models outperform both the comparable generative latent models and the discriminative non-latent baselines.
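To make the regularization comparison in the abstract concrete, here is a minimal sketch (not the paper's implementation) of the gradient of a conditional log-linear objective over a toy candidate set, with an L2 penalty added to the gradient and L1 handled by a soft-thresholding (proximal) step. The function names, the tiny feature matrix, and the candidate-set setup are illustrative assumptions; the paper computes the same expected-feature terms over packed parse forests with hierarchical pruning.

```python
import numpy as np

def log_linear_grad(w, feats, gold, l2=0.0):
    """Gradient of -log P(gold | input; w) for a log-linear model over a
    small explicit candidate set (a toy stand-in for a parse forest),
    optionally with an L2 penalty. `feats` is one feature vector per
    candidate; `gold` indexes the correct candidate."""
    scores = feats @ w
    probs = np.exp(scores - scores.max())   # softmax, numerically stable
    probs /= probs.sum()
    # Expected features under the model minus the gold features.
    grad = probs @ feats - feats[gold]
    return grad + l2 * w                    # L2 adds a smooth shrinkage term

def l1_prox(w, step):
    """Soft-thresholding: the proximal step for L1 regularization.
    It sets small weights to exactly zero, which is why L1 training
    yields the sparser grammars reported in the paper."""
    return np.sign(w) * np.maximum(np.abs(w) - step, 0.0)
```

With zero weights the model is uniform over candidates, so the gradient is simply the mean feature vector minus the gold one; the proximal step zeroes any weight whose magnitude is below the threshold, illustrating the sparsity advantage of L1.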
Cite
Petrov and Klein. "Discriminative Log-Linear Grammars with Latent Variables." Neural Information Processing Systems, 2007.
BibTeX
@inproceedings{petrov2007neurips-discriminative,
title = {{Discriminative Log-Linear Grammars with Latent Variables}},
author = {Petrov, Slav and Klein, Dan},
booktitle = {Neural Information Processing Systems},
year = {2007},
pages = {1153--1160},
url = {https://mlanthology.org/neurips/2007/petrov2007neurips-discriminative/}
}