The Latent Relation Mapping Engine: Algorithm and Experiments

Abstract

Many AI researchers and cognitive scientists have argued that analogy is the core of cognition. The most influential work on computational modeling of analogy-making is Structure Mapping Theory (SMT) and its implementation in the Structure Mapping Engine (SME). A limitation of SME is the requirement for complex hand-coded representations. We introduce the Latent Relation Mapping Engine (LRME), which combines ideas from SME and Latent Relational Analysis (LRA) in order to remove the requirement for hand-coded representations. LRME builds analogical mappings between lists of words, using a large corpus of raw text to automatically discover the semantic relations among the words. We evaluate LRME on a set of twenty analogical mapping problems, ten based on scientific analogies and ten based on common metaphors. LRME achieves human-level performance on the twenty problems. We compare LRME with a variety of alternative approaches and find that they are not able to reach the same level of performance.
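To make the mapping task concrete, here is a minimal, purely illustrative sketch (not the paper's implementation): it searches over permutations of the target word list and scores each candidate mapping by summing a pairwise relational-similarity function between corresponding word pairs. The `toy_rel_sim` function is a hypothetical stand-in; LRME itself derives relational similarity from co-occurrence statistics in a large text corpus via LRA.

```python
# Toy sketch of an LRME-style mapping search (illustrative only, not the paper's code).
from itertools import combinations, permutations
from typing import Callable, Sequence, Tuple

def best_mapping(
    source: Sequence[str],
    target: Sequence[str],
    rel_sim: Callable[[Tuple[str, str], Tuple[str, str]], float],
) -> Tuple[Tuple[str, ...], float]:
    """Return the permutation of `target` that best aligns with `source`,
    scoring each candidate by the total relational similarity between
    corresponding word pairs (a_i, a_j) and (b_i, b_j)."""
    assert len(source) == len(target)
    best_perm, best_score = None, float("-inf")
    for perm in permutations(target):
        score = sum(
            rel_sim((source[i], source[j]), (perm[i], perm[j]))
            for i, j in combinations(range(len(source)), 2)
        )
        if score > best_score:
            best_perm, best_score = perm, score
    return best_perm, best_score

# Hypothetical similarity function for the classic solar-system/atom analogy.
# A real system would estimate these scores from corpus statistics.
def toy_rel_sim(pair_a, pair_b):
    known = {(("sun", "planet"), ("nucleus", "electron")): 1.0,
             (("planet", "sun"), ("electron", "nucleus")): 1.0}
    return known.get((pair_a, pair_b), 0.0)

mapping, score = best_mapping(["sun", "planet"], ["electron", "nucleus"], toy_rel_sim)
print(mapping, score)  # ('nucleus', 'electron') 1.0
```

Brute-force search over permutations is used here only to keep the sketch short; it is exponential in the list length, and any real corpus-based similarity would replace the hand-coded lookup table.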

Cite

Text

Turney. "The Latent Relation Mapping Engine: Algorithm and Experiments." Journal of Artificial Intelligence Research, 2008. doi:10.1613/JAIR.2693

Markdown

[Turney. "The Latent Relation Mapping Engine: Algorithm and Experiments." Journal of Artificial Intelligence Research, 2008.](https://mlanthology.org/jair/2008/turney2008jair-latent/) doi:10.1613/JAIR.2693

BibTeX

@article{turney2008jair-latent,
  title     = {{The Latent Relation Mapping Engine: Algorithm and Experiments}},
  author    = {Turney, Peter D.},
  journal   = {Journal of Artificial Intelligence Research},
  year      = {2008},
  pages     = {615--655},
  doi       = {10.1613/JAIR.2693},
  volume    = {33},
  url       = {https://mlanthology.org/jair/2008/turney2008jair-latent/}
}