Getting in Shape: Word Embedding SubSpaces

Abstract

Many tasks in natural language processing require the alignment of word embeddings. Embedding alignment relies on the geometric properties of the manifold of word vectors. This paper focuses on supervised linear alignment and studies the relationship between the shape of the target embedding and the quality of the alignment. We assess the performance of aligned word vectors on semantic similarity tasks and find that the isotropy of the target embedding is critical to alignment. Furthermore, aligning with isotropic noise can deliver satisfactory results. We provide a theoretical framework and guarantees that aid in understanding the empirical results.
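For readers unfamiliar with the setup, the following is a minimal sketch (not the paper's code; the function names, the isotropy proxy, and the toy data are assumptions) of supervised linear alignment via orthogonal Procrustes, together with a crude isotropy measure of the target embedding:

```python
# Illustrative sketch only: supervised linear alignment of a source embedding
# to a target via orthogonal Procrustes, plus a simple isotropy proxy.
import numpy as np

def procrustes_align(X, Y):
    """Return an orthogonal W minimizing ||X W - Y||_F.

    X: (n, d) source embeddings; Y: (n, d) target embeddings for the same words.
    """
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt

def isotropy_ratio(E):
    """Crude isotropy proxy: ratio of smallest to largest singular value of the
    mean-centered embedding matrix; values near 1 indicate a more isotropic shape."""
    E = E - E.mean(axis=0)
    s = np.linalg.svd(E, compute_uv=False)
    return s[-1] / s[0]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, d = 1000, 50
    X = rng.normal(size=(n, d))        # toy "source" embedding
    Y = rng.normal(size=(n, d))        # isotropic "target" (pure noise)
    W = procrustes_align(X, Y)
    err = np.linalg.norm(X @ W - Y) / np.linalg.norm(Y)
    print(f"target isotropy ~ {isotropy_ratio(Y):.2f}, relative alignment error ~ {err:.2f}")
```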

Cite

Text

Zhou et al. "Getting in Shape: Word Embedding SubSpaces." International Joint Conference on Artificial Intelligence, 2019. doi:10.24963/IJCAI.2019/761

Markdown

[Zhou et al. "Getting in Shape: Word Embedding SubSpaces." International Joint Conference on Artificial Intelligence, 2019.](https://mlanthology.org/ijcai/2019/zhou2019ijcai-getting/) doi:10.24963/IJCAI.2019/761

BibTeX

@inproceedings{zhou2019ijcai-getting,
  title     = {{Getting in Shape: Word Embedding SubSpaces}},
  author    = {Zhou, Tianyuan and Sedoc, João and Rodu, Jordan},
  booktitle = {International Joint Conference on Artificial Intelligence},
  year      = {2019},
  pages     = {5478--5484},
  doi       = {10.24963/IJCAI.2019/761},
  url       = {https://mlanthology.org/ijcai/2019/zhou2019ijcai-getting/}
}