Commonsense Knowledge Reasoning and Generation with Pre-Trained Language Models: A Survey

Abstract

While commonsense knowledge acquisition and reasoning have traditionally been core research topics in the knowledge representation and reasoning community, recent years have seen a surge of interest in the natural language processing community in developing pre-trained models and testing their ability to address a variety of newly designed commonsense knowledge reasoning and generation tasks. This paper presents a survey of these tasks, discusses the strengths and weaknesses of state-of-the-art pre-trained models for commonsense reasoning and generation as revealed by these tasks, and reflects on future research directions.

Cite

Text

Bhargava and Ng. "Commonsense Knowledge Reasoning and Generation with Pre-Trained Language Models: A Survey." AAAI Conference on Artificial Intelligence, 2022, pp. 12317-12325. doi:10.1609/aaai.v36i11.21496

Markdown

[Bhargava and Ng. "Commonsense Knowledge Reasoning and Generation with Pre-Trained Language Models: A Survey." AAAI Conference on Artificial Intelligence, 2022.](https://mlanthology.org/aaai/2022/bhargava2022aaai-commonsense/) doi:10.1609/aaai.v36i11.21496

BibTeX

@inproceedings{bhargava2022aaai-commonsense,
  title     = {{Commonsense Knowledge Reasoning and Generation with Pre-Trained Language Models: A Survey}},
  author    = {Bhargava, Prajjwal and Ng, Vincent},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2022},
  pages     = {12317--12325},
  doi       = {10.1609/aaai.v36i11.21496},
  url       = {https://mlanthology.org/aaai/2022/bhargava2022aaai-commonsense/}
}