Joint Learning of Words and Meaning Representations for Open-Text Semantic Parsing
Abstract
Open-text semantic parsers are designed to interpret any statement in natural language by inferring a corresponding meaning representation (MR: a formal representation of its sense). Unfortunately, large-scale systems cannot be easily machine-learned due to a lack of directly supervised data. We propose a method that learns to assign MRs to a wide range of text (using a dictionary of more than 70,000 words mapped to more than 40,000 entities) thanks to a training scheme that combines learning from knowledge bases (e.g. WordNet) with learning from raw text. The model jointly learns representations of words, entities and MRs via a multi-task training process operating on these diverse sources of data. Hence, the system ends up providing methods for knowledge acquisition and word-sense disambiguation within the context of semantic parsing in a single elegant framework. Experiments on these various tasks indicate the promise of the approach.
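The joint embedding idea summarized above can be sketched in a few lines: entities and relations share one embedding space, and training pushes observed (subject, relation, object) triples to score better than corrupted ones via a margin ranking objective. This is a simplified, translation-style stand-in for the paper's scoring function, and all sizes, names and the energy form below are illustrative assumptions, not the authors' exact model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical tiny vocabulary (not the paper's 70k-word dictionary).
n_entities, n_relations, dim = 50, 5, 16
E = rng.normal(scale=0.1, size=(n_entities, dim))   # entity/word embeddings
R = rng.normal(scale=0.1, size=(n_relations, dim))  # relation embeddings

def energy(h, r, t):
    """Score a (head, relation, tail) triple; lower means more plausible.
    A simple L1 translation energy, standing in for the paper's scorer."""
    return float(np.sum(np.abs(E[h] + R[r] - E[t])))

def train_step(h, r, t, lr=0.01, margin=1.0):
    """One SGD step of margin ranking: make the observed triple score
    at least `margin` lower than a triple with a corrupted (random) tail."""
    t_neg = int(rng.integers(n_entities))
    pos, neg = energy(h, r, t), energy(h, r, t_neg)
    loss = max(0.0, margin + pos - neg)
    if loss > 0:
        g_pos = np.sign(E[h] + R[r] - E[t])      # subgradient of L1 energy
        g_neg = np.sign(E[h] + R[r] - E[t_neg])
        E[h] -= lr * (g_pos - g_neg)
        R[r] -= lr * (g_pos - g_neg)
        E[t] += lr * g_pos
        E[t_neg] -= lr * g_neg
    return loss

# Multi-task flavor: triples can come from several sources (e.g. WordNet
# relations and text-derived triples) but update the same embeddings.
triples = [(int(rng.integers(n_entities)), int(rng.integers(n_relations)),
            int(rng.integers(n_entities))) for _ in range(20)]
for epoch in range(100):
    for h, r, t in triples:
        train_step(h, r, t)
```

Because every data source writes into the same `E` and `R` matrices, supervision from the knowledge base and from raw text reinforce each other, which is the mechanism behind the single-framework knowledge acquisition and disambiguation the abstract claims.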
Cite
Text
Bordes et al. "Joint Learning of Words and Meaning Representations for Open-Text Semantic Parsing." Proceedings of the Fifteenth International Conference on Artificial Intelligence and Statistics, 2012.
Markdown
[Bordes et al. "Joint Learning of Words and Meaning Representations for Open-Text Semantic Parsing." Proceedings of the Fifteenth International Conference on Artificial Intelligence and Statistics, 2012.](https://mlanthology.org/aistats/2012/bordes2012aistats-joint/)
BibTeX
@inproceedings{bordes2012aistats-joint,
title = {{Joint Learning of Words and Meaning Representations for Open-Text Semantic Parsing}},
author = {Bordes, Antoine and Glorot, Xavier and Weston, Jason and Bengio, Yoshua},
booktitle = {Proceedings of the Fifteenth International Conference on Artificial Intelligence and Statistics},
year = {2012},
pages = {127--135},
volume = {22},
url = {https://mlanthology.org/aistats/2012/bordes2012aistats-joint/}
}