Empowering Language Models with Knowledge Graph Reasoning for Question Answering

Abstract

Answering open-domain questions requires world knowledge about in-context entities. Because pre-trained Language Models (LMs) lack the capacity to store all the required knowledge, external knowledge sources, such as knowledge graphs, are often used to augment LMs. In this work, we propose the knOwledge REasOning empowered Language Model (OREO-LM), which consists of a novel Knowledge Interaction Layer that can be flexibly plugged into existing Transformer-based LMs to interact collaboratively with a differentiable Knowledge Graph (KG) Reasoning module. In this way, the LM guides the KG walk toward the desired answer, while the retrieved knowledge improves the LM. By applying OREO-LM to RoBERTa and T5, we show significant performance gains, achieving state-of-the-art results in the Closed-Book setting. The performance improvement stems mainly from the KG reasoning module's capacity to infer missing relational facts. In addition, OREO-LM provides reasoning paths as rationales to interpret the model's decisions.
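The core idea of the differentiable KG reasoning described above, a soft entity distribution propagated along relation adjacency matrices, with the relation choice scored from the LM's hidden state, can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the toy KG, the relation scorer `W_rel`, and the `knowledge_interaction_step` function are all hypothetical stand-ins for the learned components of OREO-LM's Knowledge Interaction Layer.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Toy KG: 4 entities, 2 relations. A[r][i, j] = 1 iff the triple (e_i, r, e_j) holds.
num_entities, num_relations, hidden = 4, 2, 8
A = np.zeros((num_relations, num_entities, num_entities))
A[0, 0, 1] = 1.0  # (e0, r0, e1)
A[1, 1, 2] = 1.0  # (e1, r1, e2)

rng = np.random.default_rng(0)
W_rel = rng.normal(size=(hidden, num_relations))  # hypothetical relation scorer

def knowledge_interaction_step(h, p_ent):
    """One reasoning hop: the LM hidden state h selects a soft relation
    distribution, and the entity distribution is propagated along the KG.
    Every operation is differentiable, so gradients can flow back to the LM."""
    p_rel = softmax(h @ W_rel)  # the LM guides which relation to follow
    p_next = sum(p_rel[r] * (p_ent @ A[r]) for r in range(num_relations))
    z = p_next.sum()
    return p_next / z if z > 0 else p_next  # renormalize the soft walk

h = rng.normal(size=hidden)                       # stand-in for an LM hidden state
p_ent = np.zeros(num_entities)
p_ent[0] = 1.0                                    # start at the question's topic entity
p_hop1 = knowledge_interaction_step(h, p_ent)     # mass moves to e1 via r0
p_hop2 = knowledge_interaction_step(h, p_hop1)    # mass moves to e2 via r1
```

In the actual model, the resulting entity distribution is fed back into the Transformer as retrieved knowledge, and the sequence of high-probability relations traversed at each hop serves as the interpretable reasoning path.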

Cite

Text

Hu et al. "Empowering Language Models with Knowledge Graph Reasoning for Question Answering." NeurIPS 2022 Workshops: GLFrontiers, 2022.

Markdown

[Hu et al. "Empowering Language Models with Knowledge Graph Reasoning for Question Answering." NeurIPS 2022 Workshops: GLFrontiers, 2022.](https://mlanthology.org/neuripsw/2022/hu2022neuripsw-empowering/)

BibTeX

@inproceedings{hu2022neuripsw-empowering,
  title     = {{Empowering Language Models with Knowledge Graph Reasoning for Question Answering}},
  author    = {Hu, Ziniu and Xu, Yichong and Yu, Wenhao and Wang, Shuohang and Yang, Ziyi and Zhu, Chenguang and Chang, Kai-Wei and Sun, Yizhou},
  booktitle = {NeurIPS 2022 Workshops: GLFrontiers},
  year      = {2022},
  url       = {https://mlanthology.org/neuripsw/2022/hu2022neuripsw-empowering/}
}