Pretrained Encyclopedia: Weakly Supervised Knowledge-Pretrained Language Model

Abstract

Recent breakthroughs in pretrained language models have shown the effectiveness of self-supervised learning for a wide range of natural language processing (NLP) tasks. In addition to standard syntactic and semantic NLP tasks, pretrained models achieve strong improvements on tasks that involve real-world knowledge, suggesting that large-scale language modeling could be an implicit method for capturing knowledge. In this work, we further investigate the extent to which pretrained models such as BERT capture knowledge using a zero-shot fact completion task. Moreover, we propose a simple yet effective weakly supervised pretraining objective, which explicitly forces the model to incorporate knowledge about real-world entities. Models trained with our new objective yield significant improvements on the fact completion task. When applied to downstream tasks, our model consistently outperforms BERT on four entity-related question answering datasets (i.e., WebQuestions, TriviaQA, SearchQA and Quasar-T), with an average improvement of 2.7 F1 points, and on a standard fine-grained entity typing dataset (i.e., FIGER), with a 5.7-point accuracy gain.

Cite

Text

Xiong et al. "Pretrained Encyclopedia: Weakly Supervised Knowledge-Pretrained Language Model." International Conference on Learning Representations, 2020.

Markdown

[Xiong et al. "Pretrained Encyclopedia: Weakly Supervised Knowledge-Pretrained Language Model." International Conference on Learning Representations, 2020.](https://mlanthology.org/iclr/2020/xiong2020iclr-pretrained/)

BibTeX

@inproceedings{xiong2020iclr-pretrained,
  title     = {{Pretrained Encyclopedia: Weakly Supervised Knowledge-Pretrained Language Model}},
  author    = {Xiong, Wenhan and Du, Jingfei and Wang, William Yang and Stoyanov, Veselin},
  booktitle = {International Conference on Learning Representations},
  year      = {2020},
  url       = {https://mlanthology.org/iclr/2020/xiong2020iclr-pretrained/}
}