Learning Language Models from Images with ReGLL
Abstract
In this demonstration, we present ReGLL, a system that learns language models while taking into account the perceptual context in which the sentences of the model are produced. Thus, ReGLL learns from pairs (Context, Sentence), where Context is given in the form of an image whose objects have been identified, and Sentence gives a (partial) description of the image. ReGLL uses Inductive Logic Programming techniques and learns mappings between n-grams and first-order representations of their meanings. The demonstration shows some applications of the learned language models, such as generating relevant sentences describing new images supplied by the user and translating sentences from one language to another without the need for any parallel corpus.
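To make the learning setup concrete, the following is a minimal sketch (in Python, with purely illustrative names and data; ReGLL's actual representations and algorithms are not shown here) of what a (Context, Sentence) training pair and an n-gram-to-meaning mapping might look like:

```python
# Hypothetical sketch of a (Context, Sentence) training pair and the kind of
# n-gram-to-meaning mapping described in the abstract. All names and the
# coverage check below are illustrative assumptions, not ReGLL's actual API.

# Context: first-order facts describing objects identified in an image.
context = {
    ("dog", "o1"),           # object o1 is a dog
    ("ball", "o2"),          # object o2 is a ball
    ("next_to", "o1", "o2"), # the dog is next to the ball
}

# Sentence: a (partial) natural-language description of that image.
sentence = "a dog next to a ball"

# A learned language model maps n-grams (here, bigrams) to fragments of
# first-order meaning with variables.
learned_mapping = {
    ("a", "dog"): [("dog", "X")],
    ("a", "ball"): [("ball", "Y")],
    ("next", "to"): [("next_to", "X", "Y")],
}

def covered(sentence, context, mapping):
    """Check that every meaning fragment triggered by the sentence's bigrams
    can be grounded in the context (matched by predicate name only, for
    simplicity)."""
    tokens = sentence.split()
    needed = [atom
              for bigram in zip(tokens, tokens[1:])
              for atom in mapping.get(bigram, [])]
    predicates_in_context = {fact[0] for fact in context}
    return all(atom[0] in predicates_in_context for atom in needed)

print(covered(sentence, context, learned_mapping))  # True for this pair
```

A mapping like this can be run in both directions: from an image's facts toward candidate sentences (generation), or from n-grams of two languages toward a shared first-order meaning (translation without a parallel corpus).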
Cite
Text
Becerra-Bonache et al. "Learning Language Models from Images with ReGLL." European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, 2016. doi:10.1007/978-3-319-46131-1_12
Markdown
[Becerra-Bonache et al. "Learning Language Models from Images with ReGLL." European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, 2016.](https://mlanthology.org/ecmlpkdd/2016/becerrabonache2016ecmlpkdd-learning/) doi:10.1007/978-3-319-46131-1_12
BibTeX
@inproceedings{becerrabonache2016ecmlpkdd-learning,
title = {{Learning Language Models from Images with ReGLL}},
author = {Becerra-Bonache, Leonor and Blockeel, Hendrik and Galván, María and Jacquenet, François},
booktitle = {European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases},
year = {2016},
pages = {55--58},
doi = {10.1007/978-3-319-46131-1_12},
url = {https://mlanthology.org/ecmlpkdd/2016/becerrabonache2016ecmlpkdd-learning/}
}