A Reinforced Topic-Aware Convolutional Sequence-to-Sequence Model for Abstractive Text Summarization

Abstract

In this paper, we propose a deep learning approach to automatic text summarization that incorporates topic information into the convolutional sequence-to-sequence (ConvS2S) model and uses self-critical sequence training (SCST) for optimization. By jointly attending to topics and word-level alignment, our approach improves the coherence, diversity, and informativeness of generated summaries via a biased probability generation mechanism. In addition, reinforcement training with SCST directly optimizes the proposed model with respect to the non-differentiable metric ROUGE and avoids exposure bias during inference. We evaluate our approach against state-of-the-art methods on the Gigaword, DUC-2004, and LCSTS datasets. The empirical results demonstrate the superiority of the proposed method in abstractive summarization.
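The SCST objective mentioned in the abstract can be illustrated with a minimal sketch (this is not the authors' code; the function name and float-based interface are illustrative simplifications of what would be a tensor operation in a real training loop). SCST uses the model's own greedy decode as a reward baseline, so no learned critic is needed:

```python
def scst_loss(sample_logprob: float, sample_rouge: float, greedy_rouge: float) -> float:
    """Self-critical sequence training surrogate loss (Rennie et al., 2017).

    sample_logprob: summed log-probability of the sampled summary's tokens
    sample_rouge:   ROUGE reward of the sampled summary (non-differentiable)
    greedy_rouge:   ROUGE reward of the greedy (test-time) summary, used as baseline
    """
    # Advantage: how much the sampled summary beat the greedy baseline on ROUGE.
    advantage = sample_rouge - greedy_rouge
    # REINFORCE-style surrogate: minimizing -advantage * log p(sample) raises the
    # probability of samples that score above the baseline and lowers the rest.
    return -advantage * sample_logprob
```

Because the baseline is the model's own inference-time output, samples are rewarded only when they outperform what the model would actually produce at test time, which is how SCST sidesteps exposure bias.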

Cite

Text

Wang et al. "A Reinforced Topic-Aware Convolutional Sequence-to-Sequence Model for Abstractive Text Summarization." International Joint Conference on Artificial Intelligence, 2018. doi:10.24963/IJCAI.2018/619

Markdown

[Wang et al. "A Reinforced Topic-Aware Convolutional Sequence-to-Sequence Model for Abstractive Text Summarization." International Joint Conference on Artificial Intelligence, 2018.](https://mlanthology.org/ijcai/2018/wang2018ijcai-reinforced/) doi:10.24963/IJCAI.2018/619

BibTeX

@inproceedings{wang2018ijcai-reinforced,
  title     = {{A Reinforced Topic-Aware Convolutional Sequence-to-Sequence Model for Abstractive Text Summarization}},
  author    = {Wang, Li and Yao, Junlin and Tao, Yunzhe and Zhong, Li and Liu, Wei and Du, Qiang},
  booktitle = {International Joint Conference on Artificial Intelligence},
  year      = {2018},
  pages     = {4453--4460},
  doi       = {10.24963/IJCAI.2018/619},
  url       = {https://mlanthology.org/ijcai/2018/wang2018ijcai-reinforced/}
}