Invariance and Identifiability Issues for Word Embeddings

Abstract

Word embeddings are commonly obtained as optimisers of a criterion function f of a text corpus, but assessed on word-task performance using a different evaluation function g of the test data. We contend that a possible source of disparity in performance on tasks is the incompatibility between classes of transformations that leave f and g invariant. In particular, word embeddings defined by f are not unique; they are defined only up to a class of transformations to which f is invariant, and this class is larger than the class to which g is invariant. One implication of this is that the apparent superiority of one word embedding over another, as measured by word-task performance, may largely be a consequence of the arbitrary elements selected from the respective solution sets. We provide a formal treatment of the above identifiability issue, present some numerical examples, and discuss possible resolutions.
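
The identifiability issue can be illustrated with a minimal numerical sketch (not the authors' code, and not taken from the paper): a GloVe-style fit criterion f that depends on word and context vectors only through their inner products is invariant to any invertible linear map A applied as W -> WA, C -> CA^{-T}, whereas a cosine-similarity evaluation g of the word vectors alone is invariant only to orthogonal maps. The dimensions and the log-count target in the snippet are illustrative assumptions.

# Minimal sketch (assumed setup, not from the paper): f is invariant to all
# invertible linear maps, while g is invariant only to orthogonal ones.
import numpy as np

rng = np.random.default_rng(0)
V, d = 50, 8                      # vocabulary size, embedding dimension
W = rng.normal(size=(V, d))       # word vectors
C = rng.normal(size=(V, d))       # context vectors
X = rng.normal(size=(V, V))       # stand-in for log co-occurrence counts


def f(W, C):
    """GloVe-style fit criterion: depends on W, C only through W @ C.T."""
    return np.sum((W @ C.T - X) ** 2)


def g(W):
    """Evaluation criterion: cosine similarity of the first two word vectors."""
    u, v = W[0], W[1]
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))


# Any invertible A leaves f unchanged (W -> W A, C -> C A^{-T}) ...
A = rng.normal(size=(d, d))
print(np.isclose(f(W, C), f(W @ A, C @ np.linalg.inv(A).T)))   # True
# ... but generally changes the cosine-similarity evaluation g.
print(np.isclose(g(W), g(W @ A)))                              # False (typically)

# An orthogonal Q leaves both f and g unchanged: the invariance class of g
# (orthogonal maps) is strictly smaller than that of f (all invertible maps).
Q, _ = np.linalg.qr(rng.normal(size=(d, d)))
print(np.isclose(f(W, C), f(W @ Q, C @ Q)))                    # True, since Q^{-T} = Q
print(np.isclose(g(W), g(W @ Q)))                              # True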

Cite

Text

Carrington et al. "Invariance and Identifiability Issues for Word Embeddings." Neural Information Processing Systems, 2019.

Markdown

[Carrington et al. "Invariance and Identifiability Issues for Word Embeddings." Neural Information Processing Systems, 2019.](https://mlanthology.org/neurips/2019/carrington2019neurips-invariance/)

BibTeX

@inproceedings{carrington2019neurips-invariance,
  title     = {{Invariance and Identifiability Issues for Word Embeddings}},
  author    = {Carrington, Rachel and Bharath, Karthik and Preston, Simon},
  booktitle = {Neural Information Processing Systems},
  year      = {2019},
  pages     = {15140--15149},
  url       = {https://mlanthology.org/neurips/2019/carrington2019neurips-invariance/}
}