Approximating Word Ranking and Negative Sampling for Word Embedding
Abstract
CBOW (Continuous Bag-Of-Words) is one of the most commonly used techniques to generate word embeddings in various NLP tasks. However, it falls short of optimal performance because it treats all positive words uniformly and draws negative words from a simple, static sampling distribution. To resolve these issues, we propose OptRank, which optimizes word ranking and approximates negative sampling to produce better word embeddings. Specifically, we first formalize word embedding as a ranking problem. Then, we weight the positive words by their ranks so that highly ranked words carry more importance, and adopt a dynamic sampling strategy to select informative negative words. In addition, an approximation method is designed to compute word ranks efficiently. Empirical experiments show that OptRank consistently outperforms its counterparts on a benchmark dataset with different sampling scales, especially when the sampled subset is small. The code and datasets can be obtained from https://github.com/ouououououou/OptRank.
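The abstract names two ingredients: rank-based weighting of positive words and dynamic sampling of negative words, with ranks approximated rather than computed exactly. Below is a minimal, WARP-style sketch in Python/NumPy of what these ideas can look like in a CBOW-like update; it is an illustration under assumptions, not the authors' implementation, and all names (rank_weight, sgd_step, the margin of 1.0) are hypothetical.

import numpy as np

rng = np.random.default_rng(0)
V, D = 10_000, 100                          # vocabulary size, embedding dim
W_in = 0.01 * rng.standard_normal((V, D))   # context (input) embeddings
W_out = 0.01 * rng.standard_normal((V, D))  # target (output) embeddings

def rank_weight(rank):
    # Rank-based weight: sum_{i=1}^{rank} 1/i grows with the rank, so a
    # poorly ranked positive word triggers a larger update (WARP-style).
    return (1.0 / np.arange(1, rank + 1)).sum()

def sgd_step(context_ids, pos_id, lr=0.025, max_trials=100):
    h = W_in[context_ids].mean(axis=0)       # CBOW context vector
    pos_score = W_out[pos_id] @ h
    # Dynamic negative sampling: keep drawing negatives until one is
    # informative, i.e., it (nearly) outscores the positive word.
    for trial in range(1, max_trials + 1):
        neg_id = rng.integers(V)
        if neg_id != pos_id and W_out[neg_id] @ h > pos_score - 1.0:
            # Approximate the positive word's rank from the trial count,
            # instead of scoring the whole vocabulary.
            rank = max(1, (V - 1) // trial)
            w = rank_weight(rank)
            grad_h = w * (W_out[neg_id] - W_out[pos_id])  # hinge gradient wrt h
            W_out[pos_id] += lr * w * h
            W_out[neg_id] -= lr * w * h
            W_in[context_ids] -= lr * grad_h / len(context_ids)
            break

# Example update for a hypothetical (context, target) pair:
sgd_step(context_ids=[3, 17, 42, 99], pos_id=7)

The key design point the sketch illustrates is that the number of rejected draws doubles as a cheap rank estimator: an easily satisfied margin means the positive word is already ranked high, yielding a small weight and a small update.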
Cite
Text
Guo et al. "Approximating Word Ranking and Negative Sampling for Word Embedding." International Joint Conference on Artificial Intelligence, 2018. doi:10.24963/IJCAI.2018/569
Markdown
[Guo et al. "Approximating Word Ranking and Negative Sampling for Word Embedding." International Joint Conference on Artificial Intelligence, 2018.](https://mlanthology.org/ijcai/2018/guo2018ijcai-approximating/) doi:10.24963/IJCAI.2018/569
BibTeX
@inproceedings{guo2018ijcai-approximating,
title = {{Approximating Word Ranking and Negative Sampling for Word Embedding}},
author = {Guo, Guibing and Ouyang, Shichang and Yuan, Fajie and Wang, Xingwei},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {2018},
pages = {4092--4098},
doi = {10.24963/IJCAI.2018/569},
url = {https://mlanthology.org/ijcai/2018/guo2018ijcai-approximating/}
}