PIQA: Reasoning About Physical Commonsense in Natural Language

Abstract

To apply eyeshadow without a brush, should I use a cotton swab or a toothpick? Questions requiring this kind of physical commonsense pose a challenge to today's natural language understanding systems. While recent pretrained models (such as BERT) have made progress on question answering over more abstract domains – such as news articles and encyclopedia entries, where text is plentiful – in more physical domains, text is inherently limited due to reporting bias. Can AI systems learn to reliably answer physical commonsense questions without experiencing the physical world? In this paper, we introduce the task of physical commonsense reasoning and a corresponding benchmark dataset, Physical Interaction: Question Answering, or PIQA. Though humans find the dataset easy (95% accuracy), large pretrained models struggle (∼75%). We provide an analysis of the dimensions of knowledge that existing models lack, which offers significant opportunities for future research.
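
Each PIQA example pairs a physical goal with two candidate solutions, and a model must pick the more sensible one. Below is a minimal sketch of evaluation under that two-choice format; the field names (`goal`/`sol1`/`sol2`/`label`) follow the released dataset's convention, and the scorer is a hypothetical placeholder, not the paper's actual models.

```python
# Minimal sketch of evaluating a scorer on PIQA's two-choice format.
# The example and field names mirror the released dataset; the scorer
# passed to `accuracy` is a stand-in for a real pretrained model.

from typing import Callable

# One illustrative example: a goal plus two candidate solutions, where
# `label` (0 or 1) marks the physically sensible choice.
examples = [
    {
        "goal": "To apply eyeshadow without a brush,",
        "sol1": "use a cotton swab.",
        "sol2": "use a toothpick.",
        "label": 0,
    },
]

def accuracy(score: Callable[[str, str], float]) -> float:
    """Prefer the higher-scoring solution per example; return accuracy.

    `score(goal, solution)` is any plausibility scorer, e.g. a
    pretrained LM's log-likelihood of the solution given the goal.
    """
    correct = 0
    for ex in examples:
        pred = 0 if score(ex["goal"], ex["sol1"]) >= score(ex["goal"], ex["sol2"]) else 1
        correct += int(pred == ex["label"])
    return correct / len(examples)

# Toy demonstration with a trivial length-based scorer (a placeholder
# only; it carries no physical knowledge and will often be wrong).
print(accuracy(lambda goal, sol: -len(sol)))
```

Accuracy here is simply the fraction of examples on which the scorer prefers the labeled solution, which is the metric behind the human (95%) and pretrained-model (∼75%) numbers quoted above.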

Cite

Text

Bisk et al. "PIQA: Reasoning About Physical Commonsense in Natural Language." AAAI Conference on Artificial Intelligence, 2020. doi:10.1609/AAAI.V34I05.6239

Markdown

[Bisk et al. "PIQA: Reasoning About Physical Commonsense in Natural Language." AAAI Conference on Artificial Intelligence, 2020.](https://mlanthology.org/aaai/2020/bisk2020aaai-piqa/) doi:10.1609/AAAI.V34I05.6239

BibTeX

@inproceedings{bisk2020aaai-piqa,
  title     = {{PIQA: Reasoning About Physical Commonsense in Natural Language}},
  author    = {Bisk, Yonatan and Zellers, Rowan and Le Bras, Ronan and Gao, Jianfeng and Choi, Yejin},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2020},
  pages     = {7432--7439},
  doi       = {10.1609/AAAI.V34I05.6239},
  url       = {https://mlanthology.org/aaai/2020/bisk2020aaai-piqa/}
}