A Joint Model of Language and Perception for Grounded Attribute Learning
Abstract
As robots become more ubiquitous and capable, it becomes ever more important for untrained users to easily interact with them. Recently, this has led to the study of the language grounding problem, where the goal is to extract representations of the meanings of natural language tied to the physical world. We present an approach for joint learning of language and perception models for grounded attribute induction. The perception model includes classifiers for physical characteristics, and the language model is based on a probabilistic categorial grammar that enables the construction of compositional meaning representations. We evaluate on the task of interpreting sentences that describe sets of objects in a physical workspace, and demonstrate accurate task performance and effective latent-variable concept induction in physically grounded scenes.
Cite
Text
Matuszek et al. "A Joint Model of Language and Perception for Grounded Attribute Learning." International Conference on Machine Learning, 2012. doi:10.13016/M2EHU6-1M90
Markdown
[Matuszek et al. "A Joint Model of Language and Perception for Grounded Attribute Learning." International Conference on Machine Learning, 2012.](https://mlanthology.org/icml/2012/matuszek2012icml-joint/) doi:10.13016/M2EHU6-1M90
BibTeX
@inproceedings{matuszek2012icml-joint,
title = {{A Joint Model of Language and Perception for Grounded Attribute Learning}},
author = {Matuszek, Cynthia and FitzGerald, Nicholas and Zettlemoyer, Luke and Bo, Liefeng and Fox, Dieter},
booktitle = {International Conference on Machine Learning},
year = {2012},
doi = {10.13016/M2EHU6-1M90},
url = {https://mlanthology.org/icml/2012/matuszek2012icml-joint/}
}