Polyhedral Outer Approximations with Application to Natural Language Parsing
Abstract
Recent approaches to learning structured predictors often require approximate inference for tractability; yet its effects on the learned model are unclear. Meanwhile, most learning algorithms act as if computational cost were constant within the model class. This paper sheds light on the first issue by establishing risk bounds for max-margin learning with LP-relaxed inference, and addresses the second issue by proposing a new paradigm that attempts to penalize "time-consuming" hypotheses. Our analysis relies on a geometric characterization of the outer polyhedra associated with the LP relaxation. We then apply these techniques to the problem of dependency parsing, for which a concise LP formulation is provided that handles non-local output features. A significant improvement is shown over arc-factored models.
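The LP-relaxed inference the abstract refers to can be illustrated for an arc-factored dependency model: instead of optimizing over the (intractably described) arborescence polytope, one optimizes arc scores over an outer polyhedron obtained by dropping the exponentially many cycle-breaking constraints. A minimal sketch, assuming a toy 3-word sentence with made-up arc scores (not the paper's exact formulation):

```python
import numpy as np
from scipy.optimize import linprog

# Toy sentence: root = node 0, words = nodes 1..3.
# scores[(i, j)] = score of arc with head i and modifier j (illustrative values).
n = 3
scores = {(0, 1): 1, (0, 2): 1, (0, 3): 1,
          (1, 2): 9, (2, 1): 9, (1, 3): 2,
          (3, 1): 1, (2, 3): 8, (3, 2): 2}

arcs = list(scores)
c = -np.array([scores[a] for a in arcs], dtype=float)  # linprog minimizes

# Outer approximation: keep only the "each word has exactly one head"
# equalities and the box constraints 0 <= z <= 1, dropping the cycle
# constraints that define the true arborescence polytope.
A_eq = np.zeros((n, len(arcs)))
for k, (i, j) in enumerate(arcs):
    A_eq[j - 1, k] = 1.0
b_eq = np.ones(n)

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, 1)] * len(arcs))
relaxed = {a: round(x, 3) for a, x in zip(arcs, res.x) if x > 1e-6}
# With these scores the optimum keeps the 2-cycle between words 1 and 2,
# a point that lies outside the tree polytope.
print(relaxed)
```

Because the cycle constraints are relaxed, the LP optimum here selects a non-tree configuration, which is exactly why the outer polyhedron strictly contains the tree polytope and why learning with such relaxed inference deserves the risk analysis the paper develops.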
Cite
Text
Martins et al. "Polyhedral Outer Approximations with Application to Natural Language Parsing." International Conference on Machine Learning, 2009. doi:10.1145/1553374.1553466
Markdown
[Martins et al. "Polyhedral Outer Approximations with Application to Natural Language Parsing." International Conference on Machine Learning, 2009.](https://mlanthology.org/icml/2009/martins2009icml-polyhedral/) doi:10.1145/1553374.1553466
BibTeX
@inproceedings{martins2009icml-polyhedral,
title = {{Polyhedral Outer Approximations with Application to Natural Language Parsing}},
author = {Martins, André F. T. and Smith, Noah A. and Xing, Eric P.},
booktitle = {International Conference on Machine Learning},
year = {2009},
pages = {713--720},
doi = {10.1145/1553374.1553466},
url = {https://mlanthology.org/icml/2009/martins2009icml-polyhedral/}
}