Contextualized Scene Imagination for Generative Commonsense Reasoning
Abstract
Humans use natural language to compose common concepts from their environment into plausible, day-to-day scene descriptions. However, such generative commonsense reasoning (GCSR) skills are lacking in state-of-the-art text generation methods. Descriptive sentences about arbitrary concepts generated by neural text generation models (e.g., pre-trained text-to-text Transformers) are often grammatically fluent but may not correspond to human common sense, largely due to their lack of mechanisms to capture concept relations, to identify implicit concepts, and to perform generalizable reasoning about unseen concept compositions. In this paper, we propose an Imagine-and-Verbalize (I&V) method, which learns to imagine a relational scene knowledge graph (SKG) with relations between the input concepts, and leverage the SKG as a constraint when generating a plausible scene description. We collect and harmonize a set of knowledge resources from different domains and modalities, providing a rich auxiliary supervision signal for I&V. The experiments demonstrate the effectiveness of I&V in improving language models on both concept-to-sentence and concept-to-story generation tasks, while enabling the model to learn well from fewer task examples and generate SKGs that make common sense to human annotators.
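The abstract describes a two-stage pipeline: an imagination model that generates an SKG from the input concepts, followed by a verbalization model that conditions on both the concepts and the SKG. The sketch below illustrates that control flow with generic Hugging Face seq2seq models; the checkpoint names, prompt formats, and SKG linearization are placeholder assumptions for illustration, not the authors' released models or data format.

```python
# Minimal sketch of an Imagine-and-Verbalize (I&V) style pipeline.
# Both models are untrained "t5-base" placeholders; in the paper, dedicated
# imagination and verbalization models are trained on harmonized knowledge resources.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("t5-base")
imaginer = AutoModelForSeq2SeqLM.from_pretrained("t5-base")    # stands in for the imagination model
verbalizer = AutoModelForSeq2SeqLM.from_pretrained("t5-base")  # stands in for the verbalization model

def imagine_skg(concepts):
    """Step 1: 'imagine' a linearized scene knowledge graph relating the concepts."""
    inputs = tokenizer(" ".join(concepts), return_tensors="pt")
    graph_ids = imaginer.generate(**inputs, max_new_tokens=64)
    return tokenizer.decode(graph_ids[0], skip_special_tokens=True)

def verbalize(concepts, skg):
    """Step 2: generate a scene description constrained by the concepts and the SKG."""
    inputs = tokenizer(" ".join(concepts) + " | " + skg, return_tensors="pt")
    text_ids = verbalizer.generate(**inputs, max_new_tokens=64)
    return tokenizer.decode(text_ids[0], skip_special_tokens=True)

concepts = ["dog", "frisbee", "catch", "throw"]
skg = imagine_skg(concepts)          # intermediate relational scene graph (linearized)
print(verbalize(concepts, skg))      # final scene description
```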
Cite
Text
Wang et al. "Contextualized Scene Imagination for Generative Commonsense Reasoning." International Conference on Learning Representations, 2022.
Markdown
[Wang et al. "Contextualized Scene Imagination for Generative Commonsense Reasoning." International Conference on Learning Representations, 2022.](https://mlanthology.org/iclr/2022/wang2022iclr-contextualized/)
BibTeX
@inproceedings{wang2022iclr-contextualized,
title = {{Contextualized Scene Imagination for Generative Commonsense Reasoning}},
author = {Wang, PeiFeng and Zamora, Jonathan and Liu, Junfeng and Ilievski, Filip and Chen, Muhao and Ren, Xiang},
booktitle = {International Conference on Learning Representations},
year = {2022},
url = {https://mlanthology.org/iclr/2022/wang2022iclr-contextualized/}
}