Distributional Models and Deep Learning Embeddings: Combining the Best of Both Worlds

Abstract

There are two main approaches to the distributed representation of words: low-dimensional deep learning embeddings and high-dimensional distributional models, in which each dimension corresponds to a context word. In this paper, we combine these two approaches by learning embeddings based on distributional-model vectors rather than the one-hot vectors standardly used in deep learning. We show that the combined approach performs better on a word relatedness judgment task.
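
The sketch below is a minimal illustration of the contrast the abstract describes, not the paper's implementation. It builds a small high-dimensional distributional model (a co-occurrence matrix over a toy corpus) and shows how an embedding projection can take either a one-hot vector or a distributional vector as input; the corpus, window size, normalization, and dimensionality are all illustrative assumptions.

import numpy as np

# Toy corpus (assumption for illustration only).
corpus = [
    "the cat sat on the mat".split(),
    "the dog sat on the rug".split(),
    "a cat and a dog played".split(),
]

vocab = sorted({w for sent in corpus for w in sent})
idx = {w: i for i, w in enumerate(vocab)}
V = len(vocab)

# 1) High-dimensional distributional model: symmetric co-occurrence counts
#    within a +/-2 word window; each dimension corresponds to a context word.
window = 2
C = np.zeros((V, V))
for sent in corpus:
    for i, w in enumerate(sent):
        for j in range(max(0, i - window), min(len(sent), i + window + 1)):
            if j != i:
                C[idx[w], idx[sent[j]]] += 1.0

# Row-normalize so each word's distributional vector sums to 1.
D = C / np.maximum(C.sum(axis=1, keepdims=True), 1e-12)

# 2) Low-dimensional embeddings via a projection matrix W (V x d).
#    With a one-hot input, the embedding of a word is a single row of W;
#    with a distributional input, it is a weighted mixture of rows of W.
d = 5
rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(V, d))

def embed_one_hot(word):
    # Standard deep-learning-style lookup: the one-hot input selects one row of W.
    one_hot = np.zeros(V)
    one_hot[idx[word]] = 1.0
    return one_hot @ W

def embed_distributional(word):
    # Combined approach: the distributional vector weights all rows of W.
    return D[idx[word]] @ W

print("one-hot embedding of 'cat':       ", embed_one_hot("cat"))
print("distributional embedding of 'cat':", embed_distributional("cat"))

In a real setting W would be trained (e.g., with a neural language-model objective) rather than left random; the point of the sketch is only that replacing the one-hot input with a distributional vector lets the learned embedding draw on the word's full context-word distribution.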

Cite

Text

Sergienya and Schütze. "Distributional Models and Deep Learning Embeddings: Combining the Best of Both Worlds." International Conference on Learning Representations, 2014.

Markdown

[Sergienya and Schütze. "Distributional Models and Deep Learning Embeddings: Combining the Best of Both Worlds." International Conference on Learning Representations, 2014.](https://mlanthology.org/iclr/2014/sergienya2014iclr-distributional/)

BibTeX

@inproceedings{sergienya2014iclr-distributional,
  title     = {{Distributional Models and Deep Learning Embeddings: Combining the Best of Both Worlds}},
  author    = {Sergienya, Irina and Schütze, Hinrich},
  booktitle = {International Conference on Learning Representations},
  year      = {2014},
  url       = {https://mlanthology.org/iclr/2014/sergienya2014iclr-distributional/}
}