Text Rewriting Improves Semantic Role Labeling

Abstract

Large-scale annotated corpora are a prerequisite for developing high-performance NLP systems. Such corpora are expensive to produce, limited in size, and often demand linguistic expertise to create. In this paper we use text rewriting as a means of increasing the amount of labeled data available for model training. Our method uses rewrite rules automatically extracted from comparable corpora and bitexts to generate multiple versions of sentences annotated with gold standard labels. We apply this idea to semantic role labeling and show that a model trained on rewritten data outperforms the state of the art on the CoNLL-2009 benchmark dataset.
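The augmentation idea in the abstract can be illustrated with a minimal sketch (not the authors' implementation): lexical rewrite rules are applied to a role-labeled sentence, and each substituted token inherits the gold label of the token it replaces. The sentence, rules, and label scheme below are all hypothetical examples.

```python
# Illustrative sketch of rewrite-based data augmentation for SRL.
# A labeled sentence: (token, role) pairs using a BIO-style scheme;
# "V" marks the predicate.
sentence = [("The", "B-A0"), ("chef", "I-A0"), ("prepared", "V"),
            ("the", "B-A1"), ("meal", "I-A1")]

# Hypothetical rewrite rules (in the paper these are extracted
# automatically from comparable corpora and bitexts).
rules = {"prepared": ["cooked"], "meal": ["dinner", "food"]}

def rewrite(labeled, rules):
    """Yield new labeled sentences, one rule application at a time."""
    for i, (tok, role) in enumerate(labeled):
        for sub in rules.get(tok, []):
            # The substitute inherits the gold label of the original token,
            # so the rewritten sentence stays fully annotated.
            yield labeled[:i] + [(sub, role)] + labeled[i + 1:]

augmented = list(rewrite(sentence, rules))
for s in augmented:
    print(" ".join(tok for tok, _ in s))
```

Each rewritten sentence is a new labeled training instance at no annotation cost; the real method handles multi-word and structural rewrites, but the label-projection principle is the same.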

Cite

Text

Woodsend and Lapata. "Text Rewriting Improves Semantic Role Labeling." Journal of Artificial Intelligence Research, 2014. doi:10.1613/JAIR.4431

Markdown

[Woodsend and Lapata. "Text Rewriting Improves Semantic Role Labeling." Journal of Artificial Intelligence Research, 2014.](https://mlanthology.org/jair/2014/woodsend2014jair-text/) doi:10.1613/JAIR.4431

BibTeX

@article{woodsend2014jair-text,
  title     = {{Text Rewriting Improves Semantic Role Labeling}},
  author    = {Woodsend, Kristian and Lapata, Mirella},
  journal   = {Journal of Artificial Intelligence Research},
  year      = {2014},
  pages     = {133--164},
  doi       = {10.1613/JAIR.4431},
  volume    = {51},
  url       = {https://mlanthology.org/jair/2014/woodsend2014jair-text/}
}