Story Completion with Explicit Modeling of Commonsense Knowledge

Abstract

Growing up with bedtime tales, even children can easily tell how a story should develop, but selecting a coherent and reasonable ending for a story is still not easy for machines. Successfully choosing an ending requires not only a detailed analysis of the context but also commonsense reasoning and basic knowledge. Previous work [8] has shown that language models trained on very large corpora can capture common sense in an implicit and hard-to-interpret way. We explore another direction and present a novel method that explicitly incorporates commonsense knowledge from a structured dataset [11], demonstrating its potential for improving story completion.

Cite

Text

Zhang et al. "Story Completion with Explicit Modeling of Commonsense Knowledge." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2020. doi:10.1109/CVPRW50498.2020.00196

Markdown

[Zhang et al. "Story Completion with Explicit Modeling of Commonsense Knowledge." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2020.](https://mlanthology.org/cvprw/2020/zhang2020cvprw-story/) doi:10.1109/CVPRW50498.2020.00196

BibTeX

@inproceedings{zhang2020cvprw-story,
  title     = {{Story Completion with Explicit Modeling of Commonsense Knowledge}},
  author    = {Zhang, Mingda and Ye, Keren and Hwa, Rebecca and Kovashka, Adriana},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
  year      = {2020},
  pages     = {1543--1546},
  doi       = {10.1109/CVPRW50498.2020.00196},
  url       = {https://mlanthology.org/cvprw/2020/zhang2020cvprw-story/}
}