Translating Embeddings for Modeling Multi-Relational Data

Abstract

We consider the problem of embedding entities and relationships of multi-relational data in low-dimensional vector spaces. Our objective is to propose a canonical model which is easy to train, contains a reduced number of parameters and can scale up to very large databases. Hence, we propose TransE, a method which models relationships by interpreting them as translations operating on the low-dimensional embeddings of the entities. Despite its simplicity, this assumption proves to be powerful since extensive experiments show that TransE significantly outperforms state-of-the-art methods in link prediction on two knowledge bases. Besides, it can be successfully trained on a large-scale dataset with 1M entities, 25k relationships and more than 17M training samples.
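To make the translation idea concrete, the sketch below (an illustrative reading of the abstract, not the authors' reference implementation) scores a triple (h, r, t) by the L2 dissimilarity ||h + r - t|| and applies a margin-based ranking update against a corrupted triple. The dimension, margin, learning rate, and sampling scheme are placeholder choices.

# Minimal sketch of the TransE idea: a relation acts as a translation
# between entity embeddings, so for a plausible triple (h, r, t) we
# want h + r to lie close to t. Hyperparameters here are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_entities, n_relations, dim = 1000, 20, 50
margin, lr = 1.0, 0.01

# Uniformly initialized embeddings; entity vectors are L2-normalized,
# a common constraint for translation-based models.
E = rng.uniform(-6 / np.sqrt(dim), 6 / np.sqrt(dim), (n_entities, dim))
R = rng.uniform(-6 / np.sqrt(dim), 6 / np.sqrt(dim), (n_relations, dim))
E /= np.linalg.norm(E, axis=1, keepdims=True)

def energy(h, r, t):
    """Dissimilarity d(h + r, t); lower means the triple is more plausible."""
    return np.linalg.norm(E[h] + R[r] - E[t])

def sgd_step(h, r, t, h_neg, t_neg):
    """One margin-ranking update: push the true triple's energy below the
    corrupted triple's. In practice only the head OR the tail is corrupted
    at a time; passing both here just keeps the signature generic."""
    loss = margin + energy(h, r, t) - energy(h_neg, r, t_neg)
    if loss <= 0:  # margin already satisfied, no update needed
        return 0.0
    # Subgradients of the two L2 energies w.r.t. the embeddings.
    pos_res = E[h] + R[r] - E[t]
    neg_res = E[h_neg] + R[r] - E[t_neg]
    g_pos = pos_res / (np.linalg.norm(pos_res) + 1e-9)
    g_neg = neg_res / (np.linalg.norm(neg_res) + 1e-9)
    E[h]     -= lr * g_pos
    E[t]     += lr * g_pos
    R[r]     -= lr * (g_pos - g_neg)
    E[h_neg] += lr * g_neg
    E[t_neg] -= lr * g_neg
    return loss

A link-prediction query (h, r, ?) would then be answered by ranking every candidate tail t by energy(h, r, t), with the true tail expected near the top of the ranking.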

Cite

Text

Bordes et al. "Translating Embeddings for Modeling Multi-Relational Data." Neural Information Processing Systems, 2013.

Markdown

[Bordes et al. "Translating Embeddings for Modeling Multi-Relational Data." Neural Information Processing Systems, 2013.](https://mlanthology.org/neurips/2013/bordes2013neurips-translating/)

BibTeX

@inproceedings{bordes2013neurips-translating,
  title     = {{Translating Embeddings for Modeling Multi-Relational Data}},
  author    = {Bordes, Antoine and Usunier, Nicolas and Garcia-Duran, Alberto and Weston, Jason and Yakhnenko, Oksana},
  booktitle = {Neural Information Processing Systems},
  year      = {2013},
  pages     = {2787--2795},
  url       = {https://mlanthology.org/neurips/2013/bordes2013neurips-translating/}
}