Learning Entity Representations for Few-Shot Reconstruction of Wikipedia Categories

Abstract

Language modeling tasks, in which words are predicted on the basis of a local context, have been very effective for learning word embeddings and context-dependent representations of phrases. Motivated by the observation that efforts to code world knowledge into machine-readable knowledge bases tend to be entity-centric, we investigate the use of a fill-in-the-blank task to learn context-independent representations of entities from the contexts in which those entities were mentioned. We show that large-scale training of neural models allows us to learn extremely high-fidelity entity typing information, which we demonstrate with few-shot reconstruction of Wikipedia categories. Our learning approach is powerful enough to encode specialized topics such as Giro d’Italia cyclists.
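The fill-in-the-blank objective described above can be sketched as follows. This is a minimal illustration, not the paper's actual model: it assumes a simple bag-of-words context encoder in place of a neural one, a tiny toy vocabulary, and plain softmax cross-entropy with SGD; only the learned per-entity embedding table corresponds to the context-independent entity representations the abstract describes.

```python
# Hedged sketch of a fill-in-the-blank entity objective (not the paper's model):
# a context whose entity mention is blanked out is encoded as a bag-of-words
# average, every entity in a learned table is scored against that context, and
# softmax cross-entropy pushes the gold entity's embedding toward its contexts.
import numpy as np

rng = np.random.default_rng(0)

VOCAB, ENTITIES, DIM = 20, 5, 8
word_emb = rng.normal(scale=0.1, size=(VOCAB, DIM))    # fixed toy word vectors
entity_emb = rng.normal(scale=0.1, size=(ENTITIES, DIM))  # learned entity table

def encode_context(word_ids):
    """Encode a context (mention replaced by a blank) as its mean word vector."""
    return word_emb[word_ids].mean(axis=0)

def loss_and_grad(word_ids, entity_id):
    """Softmax cross-entropy over all entities; gradient w.r.t. entity table."""
    c = encode_context(word_ids)
    logits = entity_emb @ c                  # score every entity against context
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    loss = -np.log(probs[entity_id])
    grad = probs[:, None] * c[None, :]       # d(loss)/d(entity_emb)
    grad[entity_id] -= c
    return loss, grad

# Toy training pairs: (context word ids, gold entity id).
data = [(rng.integers(0, VOCAB, size=4), int(rng.integers(0, ENTITIES)))
        for _ in range(30)]

def epoch_loss():
    return sum(loss_and_grad(w, e)[0] for w, e in data)

before = epoch_loss()
for _ in range(200):                         # plain SGD on the entity table
    for w, e in data:
        _, g = loss_and_grad(w, e)
        entity_emb -= 0.5 * g
after = epoch_loss()
```

After training, rows of `entity_emb` serve as context-independent entity representations that can be compared (e.g. by dot product) against few labeled examples of a category, which is the spirit of the few-shot category reconstruction the abstract reports.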

Cite

Text

Ling et al. "Learning Entity Representations for Few-Shot Reconstruction of Wikipedia Categories." ICLR 2019 Workshops: LLD, 2019.

Markdown

[Ling et al. "Learning Entity Representations for Few-Shot Reconstruction of Wikipedia Categories." ICLR 2019 Workshops: LLD, 2019.](https://mlanthology.org/iclrw/2019/ling2019iclrw-learning/)

BibTeX

@inproceedings{ling2019iclrw-learning,
  title     = {{Learning Entity Representations for Few-Shot Reconstruction of Wikipedia Categories}},
  author    = {Ling, Jeffrey and FitzGerald, Nicholas and Soares, Livio Baldini and Weiss, David and Kwiatkowski, Tom},
  booktitle = {ICLR 2019 Workshops: LLD},
  year      = {2019},
  url       = {https://mlanthology.org/iclrw/2019/ling2019iclrw-learning/}
}