Convex Co-Embedding

Abstract

We present a general framework for association learning, where entities are embedded in a common latent space to express relatedness by geometry -- an approach that underlies the state of the art for link prediction, relation learning, multi-label tagging, relevance retrieval and ranking. Although current approaches rely on local training applied to non-convex formulations, we demonstrate how general convex formulations can be achieved for entity embedding, both for standard multi-linear and prototype-distance models. We investigate an efficient optimization strategy that allows the approach to scale. An experimental evaluation reveals the advantages of global training in different case studies.
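To make the multi-linear case concrete, below is a minimal NumPy sketch of one standard route to a convex co-embedding objective; it is illustrative and not necessarily the paper's exact formulation. Rather than learning two embedding matrices U and V through the non-convex bilinear score U V^T, the joint score matrix Z (standing in for U V^T) is optimized directly under a nuclear-norm regularizer, here with a squared loss on observed associations and proximal-gradient (singular-value-thresholding) updates. The toy data, the loss, and all variable names are assumptions made for this sketch.

import numpy as np

# Hypothetical toy association data: rows index one entity set, columns another.
rng = np.random.default_rng(0)
m, n, k = 30, 20, 4
U_true = rng.normal(size=(m, k))
V_true = rng.normal(size=(n, k))
Y = np.sign(U_true @ V_true.T)           # ground-truth +/-1 associations
mask = rng.random((m, n)) < 0.3          # which pairs are observed

def svt(Z, tau):
    """Singular value thresholding: proximal operator of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

# Convex surrogate: minimize squared loss on observed pairs + lam * ||Z||_*
# over the joint score matrix Z, which replaces the non-convex product U V^T.
Z = np.zeros((m, n))
lam, step = 1.0, 0.5
for _ in range(200):
    grad = mask * (Z - Y)                 # gradient of 0.5 * sum of squared residuals
    Z = svt(Z - step * grad, step * lam)  # proximal gradient step

# Recover co-embeddings for the two entity sets by a rank-k factorization of Z.
Uz, sz, Vtz = np.linalg.svd(Z, full_matrices=False)
U_emb = Uz[:, :k] * np.sqrt(sz[:k])
V_emb = Vtz[:k, :].T * np.sqrt(sz[:k])

acc = (np.sign(Z[~mask]) == Y[~mask]).mean()
print(f"held-out sign agreement: {acc:.2f}")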

Cite

Text

Mirzazadeh et al. "Convex Co-Embedding." AAAI Conference on Artificial Intelligence, 2014. doi:10.1609/AAAI.V28I1.8976

Markdown

[Mirzazadeh et al. "Convex Co-Embedding." AAAI Conference on Artificial Intelligence, 2014.](https://mlanthology.org/aaai/2014/mirzazadeh2014aaai-convex/) doi:10.1609/AAAI.V28I1.8976

BibTeX

@inproceedings{mirzazadeh2014aaai-convex,
  title     = {{Convex Co-Embedding}},
  author    = {Mirzazadeh, Farzaneh and Guo, Yuhong and Schuurmans, Dale},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2014},
  pages     = {1989--1996},
  doi       = {10.1609/AAAI.V28I1.8976},
  url       = {https://mlanthology.org/aaai/2014/mirzazadeh2014aaai-convex/}
}