Improving Question Generation with Sentence-Level Semantic Matching and Answer Position Inferring

Abstract

Taking an answer and its context as input, sequence-to-sequence models have made considerable progress on question generation. However, we observe that these approaches often generate wrong question words or keywords and copy answer-irrelevant words from the input. We believe the key root causes are the lack of global question semantics and the insufficient exploitation of answer position-awareness. In this paper, we propose a neural question generation model with two general modules: sentence-level semantic matching and answer position inferring. Further, we enhance the initial state of the decoder by leveraging an answer-aware gated fusion mechanism. Experimental results demonstrate that our model outperforms the state-of-the-art (SOTA) models on the SQuAD and MARCO datasets. Owing to its generality, our work also improves the existing models significantly.
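The abstract does not spell out the gated fusion, but a standard formulation gates between a passage (context) representation and an answer representation to produce the decoder's initial state: g = sigmoid(W [h_p; h_a] + b), h_0 = g ⊙ h_p + (1 − g) ⊙ h_a. The sketch below illustrates this common form; the function name, weight shapes, and toy vectors are illustrative assumptions, not the paper's exact parameterization.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_fusion(h_passage, h_answer, W, b):
    """Answer-aware gated fusion (assumed standard form).

    g  = sigmoid(W [h_passage; h_answer] + b)   # elementwise gate in (0, 1)
    h0 = g * h_passage + (1 - g) * h_answer     # convex combination per dim
    """
    concat = np.concatenate([h_passage, h_answer])
    g = sigmoid(W @ concat + b)
    return g * h_passage + (1.0 - g) * h_answer

# Toy usage: fuse random passage/answer vectors into a decoder init state.
rng = np.random.default_rng(0)
d = 4
h_p = rng.standard_normal(d)
h_a = rng.standard_normal(d)
W = rng.standard_normal((d, 2 * d))
b = np.zeros(d)
h0 = gated_fusion(h_p, h_a, W, b)
```

Because the gate lies in (0, 1), each dimension of `h0` is a convex combination of the corresponding passage and answer entries, letting the decoder start from a blend weighted toward whichever source the learned gate favors.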

Cite

Text

Ma et al. "Improving Question Generation with Sentence-Level Semantic Matching and Answer Position Inferring." AAAI Conference on Artificial Intelligence, 2020. doi:10.1609/AAAI.V34I05.6366

Markdown

[Ma et al. "Improving Question Generation with Sentence-Level Semantic Matching and Answer Position Inferring." AAAI Conference on Artificial Intelligence, 2020.](https://mlanthology.org/aaai/2020/ma2020aaai-improving/) doi:10.1609/AAAI.V34I05.6366

BibTeX

@inproceedings{ma2020aaai-improving,
  title     = {{Improving Question Generation with Sentence-Level Semantic Matching and Answer Position Inferring}},
  author    = {Ma, Xiyao and Zhu, Qile and Zhou, Yanlin and Li, Xiaolin},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2020},
  pages     = {8464--8471},
  doi       = {10.1609/AAAI.V34I05.6366},
  url       = {https://mlanthology.org/aaai/2020/ma2020aaai-improving/}
}