Unsupervised Measure of Word Similarity: How to Outperform Co-Occurrence and Vector Cosine in VSMs

Abstract

In this paper, we claim that vector cosine – generally considered among the most effective unsupervised measures for identifying word similarity in Vector Space Models – can be outperformed by an unsupervised measure that computes the extent of the intersection among the most mutually dependent contexts of the target words. To support this claim, we describe and evaluate APSyn, a variant of Average Precision that, without any optimization, outperforms both vector cosine and co-occurrence on the standard ESL test set, with an improvement ranging between +9.00% and +17.98%, depending on the number of top contexts chosen.
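The abstract describes a measure built on the overlap between the top-ranked contexts of two target words. A minimal sketch of such a measure is given below; the exact ranking function and scoring scheme here are assumptions (APSyn-style measures typically rank contexts by an association score such as PPMI or LMI and weight shared contexts by the inverse of their average rank), not a verbatim reproduction of the paper's implementation.

```python
def top_contexts(vector, n):
    """Map the n contexts with the highest association scores
    to their 1-based rank (1 = most strongly associated)."""
    ranked = sorted(vector, key=vector.get, reverse=True)[:n]
    return {ctx: rank for rank, ctx in enumerate(ranked, start=1)}

def apsyn(vec1, vec2, n=1000):
    """APSyn-style similarity: sum, over contexts shared by the two
    top-n lists, the inverse of the average rank of that context.
    vec1/vec2 are dicts mapping context -> association score
    (the score source, e.g. PPMI or LMI, is an assumption here)."""
    ranks1 = top_contexts(vec1, n)
    ranks2 = top_contexts(vec2, n)
    shared = ranks1.keys() & ranks2.keys()
    return sum(1.0 / ((ranks1[c] + ranks2[c]) / 2.0) for c in shared)
```

With this weighting, contexts that are highly ranked for both words contribute most, so two words sharing their strongest contexts score higher than two words sharing only weakly associated ones.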

Cite

Text

Santus et al. "Unsupervised Measure of Word Similarity: How to Outperform Co-Occurrence and Vector Cosine in VSMs." AAAI Conference on Artificial Intelligence, 2016. doi:10.1609/AAAI.V30I1.9932

Markdown

[Santus et al. "Unsupervised Measure of Word Similarity: How to Outperform Co-Occurrence and Vector Cosine in VSMs." AAAI Conference on Artificial Intelligence, 2016.](https://mlanthology.org/aaai/2016/santus2016aaai-unsupervised/) doi:10.1609/AAAI.V30I1.9932

BibTeX

@inproceedings{santus2016aaai-unsupervised,
  title     = {{Unsupervised Measure of Word Similarity: How to Outperform Co-Occurrence and Vector Cosine in VSMs}},
  author    = {Santus, Enrico and Lenci, Alessandro and Chiu, Tin-Shing and Lu, Qin and Huang, Chu-Ren},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2016},
  pages     = {4260-4261},
  doi       = {10.1609/AAAI.V30I1.9932},
  url       = {https://mlanthology.org/aaai/2016/santus2016aaai-unsupervised/}
}