Learning Context-Sensitive Word Embeddings with Neural Tensor Skip-Gram Model

Abstract

Distributed word representations have attracted rising interest in the NLP community. Most existing models assign only one vector to each word, which ignores polysemy and thus degrades their effectiveness on downstream tasks. To address this problem, some recent work adopts multi-prototype models that learn multiple embeddings per word type. In this paper, we distinguish the different senses of each word by their latent topics. We present a general architecture that learns word and topic embeddings efficiently; it extends the Skip-Gram model and models the interaction between words and topics simultaneously. Experiments on word similarity and text classification tasks show that our model outperforms state-of-the-art methods.
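The core idea, a word vector and a topic vector combined through a tensor to yield a context-sensitive embedding scored against context words as in Skip-Gram, can be sketched as follows. This is a minimal illustrative sketch only: the tensor `T`, the dimensions, and the dot-product scoring are assumptions for exposition, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 8, 6  # illustrative word- and topic-embedding dimensions

# Hypothetical parameters: a word vector, a topic vector for one latent
# sense of that word, and a 3-way tensor modeling their interaction.
w = rng.normal(size=d)           # word embedding
t = rng.normal(size=k)           # topic embedding (latent sense)
T = rng.normal(size=(d, k, d)) * 0.1  # word-topic interaction tensor

# Bilinear tensor combination: each output component m is w^T T[:, :, m] t,
# giving a context-sensitive embedding for (word, topic).
v = np.einsum('i,ijk,j->k', w, T, t)

# Skip-Gram-style compatibility score against a context word's vector;
# training would push this score up for observed contexts via softmax/
# negative sampling, exactly as in the standard Skip-Gram objective.
c = rng.normal(size=d)           # context word embedding
score = c @ v
```

Because the combined vector `v` depends on the topic as well as the word, the same word type yields different embeddings under different latent topics, which is how the model captures polysemy.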

Cite

Text

Liu et al. "Learning Context-Sensitive Word Embeddings with Neural Tensor Skip-Gram Model." International Joint Conference on Artificial Intelligence, 2015.

Markdown

[Liu et al. "Learning Context-Sensitive Word Embeddings with Neural Tensor Skip-Gram Model." International Joint Conference on Artificial Intelligence, 2015.](https://mlanthology.org/ijcai/2015/liu2015ijcai-learning/)

BibTeX

@inproceedings{liu2015ijcai-learning,
  title     = {{Learning Context-Sensitive Word Embeddings with Neural Tensor Skip-Gram Model}},
  author    = {Liu, Pengfei and Qiu, Xipeng and Huang, Xuanjing},
  booktitle = {International Joint Conference on Artificial Intelligence},
  year      = {2015},
  pages     = {1284--1290},
  url       = {https://mlanthology.org/ijcai/2015/liu2015ijcai-learning/}
}