Multi-Modal Word Synset Induction

Abstract

A word in natural language can be polysemous, having multiple meanings, as well as synonymous, meaning the same thing as other words. Word sense induction attempts to find the senses of polysemous words. Synonymy detection attempts to find when two words are interchangeable. We combine these tasks, first inducing word senses and then detecting similar senses to form word-sense synonym sets (synsets) in an unsupervised fashion. Given pairs of images and text with noun phrase labels, we perform synset induction to produce collections of underlying concepts described by one or more noun phrases. We find that considering multi-modal features from both visual and textual context yields better induced synsets than using either context alone. Human evaluations show that our unsupervised, multi-modally induced synsets are comparable in quality to annotation-assisted ImageNet synsets, achieving about 84% of ImageNet synsets' approval.
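The pipeline the abstract describes — cluster multi-modal observations of a noun phrase into senses, then merge senses whose representations are similar enough to form synsets — can be sketched with a toy example. This is a minimal illustration, not the authors' implementation: the function names, the k-means sense induction, the cosine-similarity merge, and the threshold are all assumptions for demonstration purposes; the paper's actual features come from paired images and text.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def induce_senses(vectors, k, iters=20):
    """Toy sense induction: k-means over multi-modal feature vectors
    (e.g. concatenated visual + textual-context features).
    Initialization from the first k vectors is a simplification."""
    centroids = np.array(vectors[:k], dtype=float)
    labels = np.zeros(len(vectors), dtype=int)
    for _ in range(iters):
        # Assign each observation to its most similar sense centroid.
        labels = np.array([
            max(range(k), key=lambda c: cosine(v, centroids[c]))
            for v in vectors
        ])
        # Update each centroid to the mean of its members.
        for c in range(k):
            members = vectors[labels == c]
            if len(members):
                centroids[c] = members.mean(axis=0)
    return centroids, labels

def merge_senses(centroids, threshold=0.9):
    """Greedily merge sense centroids whose similarity exceeds a
    threshold, grouping senses of different noun phrases that describe
    the same underlying concept into one synset."""
    synsets = []
    for i in range(len(centroids)):
        for group in synsets:
            if cosine(centroids[i], centroids[group[0]]) >= threshold:
                group.append(i)
                break
        else:
            synsets.append([i])
    return synsets
```

For example, six observations drawn from two distinct concepts yield two senses, and near-duplicate sense centroids (the polysemy/synonymy case) collapse into a single synset under the merge step.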

Cite

Text

Thomason and Mooney. "Multi-Modal Word Synset Induction." International Joint Conference on Artificial Intelligence, 2017. doi:10.24963/IJCAI.2017/575

Markdown

[Thomason and Mooney. "Multi-Modal Word Synset Induction." International Joint Conference on Artificial Intelligence, 2017.](https://mlanthology.org/ijcai/2017/thomason2017ijcai-multi/) doi:10.24963/IJCAI.2017/575

BibTeX

@inproceedings{thomason2017ijcai-multi,
  title     = {{Multi-Modal Word Synset Induction}},
  author    = {Thomason, Jesse and Mooney, Raymond J.},
  booktitle = {International Joint Conference on Artificial Intelligence},
  year      = {2017},
  pages     = {4116--4122},
  doi       = {10.24963/IJCAI.2017/575},
  url       = {https://mlanthology.org/ijcai/2017/thomason2017ijcai-multi/}
}