Context Selection for Embedding Models

Abstract

Word embeddings are an effective tool for analyzing language. They have recently been extended to model other types of data beyond text, such as items in recommendation systems. Embedding models consider the probability of a target observation (a word or an item) conditioned on the elements in the context (other words or items). In this paper, we show that conditioning on all the elements in the context is not optimal. Instead, we model the probability of the target conditioned on a learned subset of the elements in the context. We use amortized variational inference to automatically choose this subset. Compared to standard embedding models, this method improves predictions and the quality of the embeddings.
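
To make the idea concrete, the sketch below shows one way a target can be scored against a *learned subset* of its context, with an amortized network producing per-element inclusion probabilities. This is not the authors' implementation: the class and layer names are hypothetical, and the relaxed-Bernoulli (Gumbel-Sigmoid) trick used to backpropagate through the discrete selection is a common choice rather than the paper's exact estimator.

```python
# Minimal sketch (assumed names, not the paper's code): an embedding model that
# conditions the target on a learned subset of its context elements.
import torch
import torch.nn as nn


class ContextSelectionEmbedding(nn.Module):
    def __init__(self, vocab_size, dim=64):
        super().__init__()
        self.target_emb = nn.Embedding(vocab_size, dim)   # target (rho) vectors
        self.context_emb = nn.Embedding(vocab_size, dim)  # context (alpha) vectors
        # Amortized inference network: scores each context element for inclusion.
        self.selector = nn.Sequential(
            nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, 1)
        )

    def forward(self, target, context, temperature=0.5):
        # target: (batch,), context: (batch, ctx_len) of token/item indices
        t = self.target_emb(target)            # (batch, dim)
        c = self.context_emb(context)          # (batch, ctx_len, dim)

        # Per-element inclusion logits from (target, context element) pairs.
        pair = torch.cat([t.unsqueeze(1).expand_as(c), c], dim=-1)
        logits = self.selector(pair).squeeze(-1)           # (batch, ctx_len)

        # Relaxed Bernoulli sample of the subset indicators (soft mask in (0,1)).
        u = torch.rand_like(logits).clamp(1e-6, 1 - 1e-6)
        noise = torch.log(u) - torch.log(1 - u)
        mask = torch.sigmoid((logits + noise) / temperature)

        # Score the target against the selected (masked-average) context.
        selected = (mask.unsqueeze(-1) * c).sum(1) / (mask.sum(1, keepdim=True) + 1e-6)
        score = (t * selected).sum(-1)                      # higher = more likely
        return score, mask


# Usage with dummy data (illustrative only):
model = ContextSelectionEmbedding(vocab_size=10000)
target = torch.randint(0, 10000, (32,))
context = torch.randint(0, 10000, (32, 8))
score, mask = model(target, context)
```

A full treatment would also include the KL term of the variational objective and a prior over the inclusion variables, which are omitted here for brevity.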

Cite

Text

Liu et al. "Context Selection for Embedding Models." Neural Information Processing Systems, 2017.

Markdown

[Liu et al. "Context Selection for Embedding Models." Neural Information Processing Systems, 2017.](https://mlanthology.org/neurips/2017/liu2017neurips-context/)

BibTeX

@inproceedings{liu2017neurips-context,
  title     = {{Context Selection for Embedding Models}},
  author    = {Liu, Liping and Ruiz, Francisco and Athey, Susan and Blei, David},
  booktitle = {Neural Information Processing Systems},
  year      = {2017},
  pages     = {4816--4825},
  url       = {https://mlanthology.org/neurips/2017/liu2017neurips-context/}
}