Inductive Relation Prediction by BERT
Abstract
Relation prediction in knowledge graphs is dominated by embedding-based methods, which mainly focus on the transductive setting. Unfortunately, they cannot handle inductive learning, where unseen entities and relations are present, and they cannot take advantage of prior knowledge. Furthermore, their inference process is not easily explainable. In this work, we propose an all-in-one solution, called BERTRL (BERT-based Relational Learning), which leverages a pre-trained language model and fine-tunes it by taking relation instances and their possible reasoning paths as training samples. BERTRL outperforms the state of the art in 15 out of 18 cases in both inductive and transductive settings. Meanwhile, it demonstrates strong generalization capability in few-shot learning and is explainable. The data and code can be found at https://github.com/zhw12/BERTRL.
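As a rough illustration of the idea described in the abstract, the sketch below scores a candidate triple together with one reasoning path using a BERT sequence classifier. It is a minimal sketch, not the authors' released code: the verbalization templates, the example triple and path, and the use of `bert-base-uncased` via the Hugging Face `transformers` library are all assumptions for illustration (the classifier head here is untrained, so the score is meaningful only after fine-tuning).

```python
# Minimal sketch (assumed setup, not BERTRL's exact implementation):
# score a (head, relation, tail) triple paired with a reasoning path
# using a BERT sequence classifier, as the abstract describes.
import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2  # plausible vs. implausible
)
model.eval()

# Target relation instance, verbalized as text (illustrative template).
triple = "question: harry potter ; educated at ; hogwarts"
# One candidate reasoning path between head and tail from the graph.
path = "context: harry potter ; student of ; hogwarts school"

# Encode the triple and the path as a BERT sentence pair.
inputs = tokenizer(triple, path, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

# Probability that the path supports the target triple.
score = torch.softmax(logits, dim=-1)[0, 1].item()
print(f"plausibility score: {score:.3f}")
```

In this framing, fine-tuning would use relation instances paired with their reasoning paths as positive training samples and corrupted triples as negatives, so the classifier learns to judge whether a path supports a triple.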
Cite
Text
Zha et al. "Inductive Relation Prediction by BERT." AAAI Conference on Artificial Intelligence, 2022. doi:10.1609/AAAI.V36I5.20537
Markdown
[Zha et al. "Inductive Relation Prediction by BERT." AAAI Conference on Artificial Intelligence, 2022.](https://mlanthology.org/aaai/2022/zha2022aaai-inductive/) doi:10.1609/AAAI.V36I5.20537
BibTeX
@inproceedings{zha2022aaai-inductive,
title = {{Inductive Relation Prediction by BERT}},
author = {Zha, Hanwen and Chen, Zhiyu and Yan, Xifeng},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2022},
pages = {5923--5931},
doi = {10.1609/AAAI.V36I5.20537},
url = {https://mlanthology.org/aaai/2022/zha2022aaai-inductive/}
}