JAKET: Joint Pre-Training of Knowledge Graph and Language Understanding
Abstract
Knowledge graphs (KGs) contain rich information about world knowledge, entities, and relations, making them valuable supplements to existing pre-trained language models. However, it remains challenging to efficiently integrate information from a KG into language modeling, and understanding a knowledge graph in turn requires related textual context. We propose JAKET, a novel joint pre-training framework that models both the knowledge graph and language. Its knowledge module and language module provide essential information to mutually assist each other: the knowledge module produces embeddings for entities mentioned in text, while the language module generates context-aware initial embeddings for entities and relations in the graph. This design enables the pre-trained model to easily adapt to unseen knowledge graphs in new domains. Experimental results on several knowledge-aware NLP tasks show that the proposed framework achieves superior performance by effectively leveraging knowledge in language understanding.
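The mutual-assistance design described in the abstract can be pictured as two modules exchanging embeddings: the language module encodes entity descriptions into initial entity embeddings, the knowledge module propagates them over the graph, and the resulting entity embeddings are injected back into the text encoder at entity mention positions. The sketch below is an illustration under simplifying assumptions, not the authors' implementation: the module classes, dimensions, mean-pooling of description tokens, and the single graph-convolution step are all placeholders chosen for brevity.

```python
import torch
import torch.nn as nn

class LanguageModule(nn.Module):
    """Toy stand-in for the language module: contextual token embeddings."""
    def __init__(self, vocab_size=1000, dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, token_ids, entity_embs=None, entity_positions=None):
        x = self.embed(token_ids)
        if entity_embs is not None:
            # Knowledge -> language: add entity embeddings at mention positions.
            x = x.clone()
            for (b, t), e in zip(entity_positions, entity_embs):
                x[b, t] = x[b, t] + e
        return self.encoder(x)

class KnowledgeModule(nn.Module):
    """Toy stand-in for the knowledge module: one graph-convolution step."""
    def __init__(self, dim=64):
        super().__init__()
        self.proj = nn.Linear(dim, dim)

    def forward(self, init_entity_embs, adjacency):
        # Language -> knowledge: initial entity embeddings come from encoding
        # each entity's description text; propagate them over graph edges.
        neighbor_sum = adjacency @ init_entity_embs
        return torch.relu(self.proj(neighbor_sum + init_entity_embs))

lm, km = LanguageModule(), KnowledgeModule()
desc_tokens = torch.randint(0, 1000, (3, 8))      # descriptions of 3 entities
init_embs = lm(desc_tokens).mean(dim=1)           # context-aware initial entity embeddings
adj = torch.eye(3)                                 # trivial graph, for illustration only
entity_embs = km(init_embs, adj)                   # knowledge-enhanced entity embeddings
text_tokens = torch.randint(0, 1000, (1, 16))
out = lm(text_tokens, entity_embs[:1], [(0, 5)])   # inject entity 0 at token position 5
```

In the actual framework the two modules are pre-trained jointly; the sketch only shows the direction of information flow between them.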
Cite
Text
Yu et al. "JAKET: Joint Pre-Training of Knowledge Graph and Language Understanding." AAAI Conference on Artificial Intelligence, 2022. doi:10.1609/AAAI.V36I10.21417
Markdown
[Yu et al. "JAKET: Joint Pre-Training of Knowledge Graph and Language Understanding." AAAI Conference on Artificial Intelligence, 2022.](https://mlanthology.org/aaai/2022/yu2022aaai-jaket/) doi:10.1609/AAAI.V36I10.21417
BibTeX
@inproceedings{yu2022aaai-jaket,
title = {{JAKET: Joint Pre-Training of Knowledge Graph and Language Understanding}},
author = {Yu, Donghan and Zhu, Chenguang and Yang, Yiming and Zeng, Michael},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2022},
pages = {11630--11638},
doi = {10.1609/AAAI.V36I10.21417},
url = {https://mlanthology.org/aaai/2022/yu2022aaai-jaket/}
}