Neural Models for Sequence Chunking

Abstract

Many natural language understanding (NLU) tasks, such as shallow parsing (i.e., text chunking) and semantic slot filling, require the assignment of representative labels to the meaningful chunks in a sentence. Most of the current deep neural network (DNN) based methods consider these tasks as a sequence labeling problem, in which a word, rather than a chunk, is treated as the basic unit for labeling. These chunks are then inferred from the standard IOB (Inside-Outside-Beginning) labels. In this paper, we propose an alternative approach by investigating the use of DNNs for sequence chunking, and propose three neural models so that each chunk can be treated as a complete unit for labeling. Experimental results show that the proposed neural sequence chunking models can achieve state-of-the-art performance on both the text chunking and slot filling tasks.
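As an illustration of the word-level scheme the abstract contrasts with chunk-level labeling, the following sketch (not from the paper; the function name and example sentence are illustrative) decodes standard IOB tags back into typed chunks:

```python
def iob_to_chunks(words, tags):
    """Group words into (label, [words]) chunks from IOB tags.

    Tags follow the B-X / I-X / O convention: B-X begins a chunk of
    type X, I-X continues it, and O marks words outside any chunk.
    """
    chunks, current = [], None
    for word, tag in zip(words, tags):
        if tag.startswith("B-"):
            # A B- tag always closes any open chunk and starts a new one.
            if current:
                chunks.append(current)
            current = (tag[2:], [word])
        elif tag.startswith("I-") and current and current[0] == tag[2:]:
            # An I- tag of the same type extends the open chunk.
            current[1].append(word)
        else:
            # "O", or an I- tag with no matching open chunk.
            if current:
                chunks.append(current)
            current = None
    if current:
        chunks.append(current)
    return chunks

words = ["But", "it", "did", "not", "fall"]
tags = ["O", "B-NP", "B-VP", "I-VP", "I-VP"]
print(iob_to_chunks(words, tags))
# → [('NP', ['it']), ('VP', ['did', 'not', 'fall'])]
```

Because chunk boundaries must be reconstructed from per-word tags like this, word-level models can emit inconsistent tag sequences; the paper's models avoid the issue by labeling each chunk as a single unit.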

Cite

Text

Zhai et al. "Neural Models for Sequence Chunking." AAAI Conference on Artificial Intelligence, 2017. doi:10.1609/AAAI.V31I1.10995

Markdown

[Zhai et al. "Neural Models for Sequence Chunking." AAAI Conference on Artificial Intelligence, 2017.](https://mlanthology.org/aaai/2017/zhai2017aaai-neural/) doi:10.1609/AAAI.V31I1.10995

BibTeX

@inproceedings{zhai2017aaai-neural,
  title     = {{Neural Models for Sequence Chunking}},
  author    = {Zhai, Feifei and Potdar, Saloni and Xiang, Bing and Zhou, Bowen},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2017},
  pages     = {3365--3371},
  doi       = {10.1609/AAAI.V31I1.10995},
  url       = {https://mlanthology.org/aaai/2017/zhai2017aaai-neural/}
}