Paragraph-Level Commonsense Transformers with Recurrent Memory

Abstract

Human understanding of narrative texts requires making commonsense inferences beyond what is explicitly stated in the text. A recent model, COMET, can generate such inferences along several dimensions, such as pre- and post-conditions, motivations, and mental states of the participants. However, COMET was trained on short phrases and is therefore discourse-agnostic. When presented with each sentence of a multi-sentence narrative, it might generate inferences that are inconsistent with the rest of the narrative. We present the task of discourse-aware commonsense inference. Given a sentence within a narrative, the goal is to generate commonsense inferences along predefined dimensions while maintaining coherence with the rest of the narrative. Because such large-scale paragraph-level annotation is costly and difficult to obtain, we use available sentence-level annotations to efficiently and automatically construct a distantly supervised corpus. Using this corpus, we train PARA-COMET, a discourse-aware model that incorporates paragraph-level information to generate coherent commonsense inferences from narratives. PARA-COMET captures both semantic knowledge pertaining to prior world knowledge and episodic knowledge involving how current events relate to prior and future events in a narrative. Our results confirm that PARA-COMET outperforms the sentence-level baselines, particularly in generating inferences that are both coherent and novel.

Cite

Text

Gabriel et al. "Paragraph-Level Commonsense Transformers with Recurrent Memory." AAAI Conference on Artificial Intelligence, 2021. doi:10.1609/AAAI.V35I14.17521

Markdown

[Gabriel et al. "Paragraph-Level Commonsense Transformers with Recurrent Memory." AAAI Conference on Artificial Intelligence, 2021.](https://mlanthology.org/aaai/2021/gabriel2021aaai-paragraph/) doi:10.1609/AAAI.V35I14.17521

BibTeX

@inproceedings{gabriel2021aaai-paragraph,
  title     = {{Paragraph-Level Commonsense Transformers with Recurrent Memory}},
  author    = {Gabriel, Saadia and Bhagavatula, Chandra and Shwartz, Vered and Le Bras, Ronan and Forbes, Maxwell and Choi, Yejin},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2021},
  pages     = {12857--12865},
  doi       = {10.1609/AAAI.V35I14.17521},
  url       = {https://mlanthology.org/aaai/2021/gabriel2021aaai-paragraph/}
}