Latent Relation Language Models
Abstract
In this paper, we propose Latent Relation Language Models (LRLMs), a class of language models that parameterizes the joint distribution over the words in a document and the entities that occur therein via knowledge graph relations. This model has a number of attractive properties: it not only improves language modeling performance, but is also able to annotate the posterior probability of entity spans for a given text through relations. Experiments demonstrate empirical improvements over both word-based language models and a previous approach that incorporates knowledge graph information. Qualitative analysis further demonstrates the proposed model's ability to learn to predict appropriate relations in context.
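As a rough illustration of the parameterization described in the abstract (a sketch not taken from this page; the symbols X, Z, and Z(X) are assumed notation), the joint distribution over a text X and latent relation spans Z can be written as a marginalization over segmentations of the text, with the posterior over entity spans recovered by Bayes' rule:

\begin{align}
  p(X) &= \sum_{Z \in \mathcal{Z}(X)} p(X, Z)
        = \sum_{Z \in \mathcal{Z}(X)} \prod_{t=1}^{|Z|} p(z_t \mid z_{<t}) \\
  p(Z \mid X) &= \frac{p(X, Z)}{\sum_{Z' \in \mathcal{Z}(X)} p(X, Z')}
\end{align}

Here each latent span z_t would be generated either word by word or via a knowledge graph relation, which is consistent with, but not specified by, the abstract above.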
Cite
Text
Hayashi et al. "Latent Relation Language Models." AAAI Conference on Artificial Intelligence, 2020. doi:10.1609/AAAI.V34I05.6298
Markdown
[Hayashi et al. "Latent Relation Language Models." AAAI Conference on Artificial Intelligence, 2020.](https://mlanthology.org/aaai/2020/hayashi2020aaai-latent/) doi:10.1609/AAAI.V34I05.6298
BibTeX
@inproceedings{hayashi2020aaai-latent,
title = {{Latent Relation Language Models}},
author = {Hayashi, Hiroaki and Hu, Zecong and Xiong, Chenyan and Neubig, Graham},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2020},
pages = {7911-7918},
doi = {10.1609/AAAI.V34I05.6298},
url = {https://mlanthology.org/aaai/2020/hayashi2020aaai-latent/}
}