Knowledge Graph Representation with Jointly Structural and Textual Encoding

Abstract

The objective of knowledge graph embedding is to encode both entities and relations of knowledge graphs into continuous low-dimensional vector spaces. Previously, most work focused on the symbolic representation of knowledge graphs with structural information, which cannot handle new entities or entities with few facts well. In this paper, we propose a novel deep architecture that utilizes both structural and textual information of entities. Specifically, we introduce three neural models to encode the valuable information from the text description of an entity, among which an attentive model can select related information as needed. Then, a gating mechanism is applied to integrate the representations of structure and text into a unified architecture. Experiments show that our models outperform baselines and obtain state-of-the-art results on link prediction and triplet classification tasks.
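The gating mechanism described in the abstract can be illustrated with a minimal sketch: an element-wise gate interpolates between the structural and textual embeddings of an entity. The convex-combination form `e = g * e_s + (1 - g) * e_t` and the function names below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def sigmoid(x):
    # squash gate logits into (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

def gated_embedding(e_struct, e_text, gate_logits):
    """Combine structural and textual entity embeddings with an
    element-wise gate: e = g * e_s + (1 - g) * e_t.
    (Illustrative sketch; the paper's exact parameterization may differ.)"""
    g = sigmoid(gate_logits)  # one gate value per embedding dimension
    return g * e_struct + (1.0 - g) * e_text

# toy 4-dimensional embeddings
e_s = np.array([1.0, 0.0, 1.0, 0.0])   # structure-based embedding
e_t = np.array([0.0, 1.0, 0.0, 1.0])   # text-based embedding
gate = np.array([10.0, 10.0, -10.0, -10.0])  # saturates to ~[1, 1, 0, 0]

e = gated_embedding(e_s, e_t, gate)
# dimensions with gate ~1 follow e_s, dimensions with gate ~0 follow e_t
```

Because the gate is learned per dimension, the model can lean on structural evidence for well-connected entities while falling back on textual evidence for entities with few facts.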

Cite

Text

Xu et al. "Knowledge Graph Representation with Jointly Structural and Textual Encoding." International Joint Conference on Artificial Intelligence, 2017. doi:10.24963/IJCAI.2017/183

Markdown

[Xu et al. "Knowledge Graph Representation with Jointly Structural and Textual Encoding." International Joint Conference on Artificial Intelligence, 2017.](https://mlanthology.org/ijcai/2017/xu2017ijcai-knowledge/) doi:10.24963/IJCAI.2017/183

BibTeX

@inproceedings{xu2017ijcai-knowledge,
  title     = {{Knowledge Graph Representation with Jointly Structural and Textual Encoding}},
  author    = {Xu, Jiacheng and Qiu, Xipeng and Chen, Kan and Huang, Xuanjing},
  booktitle = {International Joint Conference on Artificial Intelligence},
  year      = {2017},
  pages     = {1318--1324},
  doi       = {10.24963/IJCAI.2017/183},
  url       = {https://mlanthology.org/ijcai/2017/xu2017ijcai-knowledge/}
}