Learning Word Vectors Efficiently Using Shared Representations and Document Representations
Abstract
We propose improved word embedding models based on the vLBL and ivLBL models, obtained by sharing representations between context and target words and by using document representations. Our proposed models are much simpler, with almost half as many parameters as the state-of-the-art methods. We achieve better results on the word analogy task than the best previously reported, while using significantly less training data and computing time.
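Since this page only summarizes the paper, the following is a minimal, hypothetical sketch (not the authors' code) of the core idea the abstract names: a vLBL-style log-bilinear scorer in which context and target words share a single embedding matrix, which is what roughly halves the parameter count. All identifiers, the toy sizes, and the position-independent averaging of context vectors are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
V, D = 10, 8  # toy vocabulary size and embedding dimension
# One shared embedding matrix for both roles, instead of separate
# context and target matrices (the source of the ~2x parameter saving).
R = rng.normal(scale=0.1, size=(V, D))

def score_targets(context_ids):
    """Score every target word given a window of context word ids.

    The predicted representation is the average of the shared context
    vectors; its dot product with each row of R scores the targets, so
    the same matrix R plays both the context and the target role.
    """
    context_vec = R[context_ids].mean(axis=0)  # predicted representation
    return R @ context_vec                     # one score per target word

# Usage: a softmax over target scores for a 2-word context window.
scores = score_targets(np.array([3, 7]))
probs = np.exp(scores - scores.max())
probs /= probs.sum()
print(probs)
```

In the full model the context vectors would typically be combined with position-dependent weights rather than a plain average; the sketch keeps the simpler form only to make the shared-matrix idea explicit.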
Cite
Text
Luo and Xu. "Learning Word Vectors Efficiently Using Shared Representations and Document Representations." AAAI Conference on Artificial Intelligence, 2015. doi:10.1609/AAAI.V29I1.9711
Markdown
[Luo and Xu. "Learning Word Vectors Efficiently Using Shared Representations and Document Representations." AAAI Conference on Artificial Intelligence, 2015.](https://mlanthology.org/aaai/2015/luo2015aaai-learning/) doi:10.1609/AAAI.V29I1.9711
BibTeX
@inproceedings{luo2015aaai-learning,
title = {{Learning Word Vectors Efficiently Using Shared Representations and Document Representations}},
author = {Luo, Qun and Xu, Weiran},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2015},
pages = {4180--4181},
doi = {10.1609/AAAI.V29I1.9711},
url = {https://mlanthology.org/aaai/2015/luo2015aaai-learning/}
}