Improving Information Extraction from Images with Learned Semantic Models
Abstract
Many applications require an understanding of an image that goes beyond the simple detection and classification of its objects. In particular, a great deal of semantic information is carried in the relationships between objects. We have previously shown that combining a visual model with a statistical semantic prior model can improve the task of mapping images to their associated scene descriptions. In this paper, we review the model and compare it to a novel conditional multi-way model for visual relationship detection, which does not include an explicitly trained visual prior model. We also discuss potential relationships between the proposed methods and memory models of the human brain.
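The abstract's core idea of blending a visual model's evidence with a learned semantic prior over (subject, predicate, object) triples can be illustrated with a minimal sketch. The DistMult-style triple scoring, the blending weight `alpha`, and the toy vocabulary below are all illustrative assumptions, not the paper's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy vocabulary (an assumption, not from the paper).
entities = ["person", "horse", "hat"]
predicates = ["rides", "wears"]

d = 8  # embedding dimension
E = rng.normal(size=(len(entities), d))    # entity embeddings
P = rng.normal(size=(len(predicates), d))  # predicate embeddings

def prior_score(s, p, o):
    """DistMult-style semantic prior over (subject, predicate, object) triples."""
    return float(np.sum(E[s] * P[p] * E[o]))

def combined_score(visual_logp, s, p, o, alpha=0.5):
    """Blend the visual model's evidence with the semantic prior.

    A simple weighted sum; the blending scheme is an assumption for
    illustration only.
    """
    return alpha * visual_logp + (1 - alpha) * prior_score(s, p, o)

# Hypothetical visual log-probability for the triple (person, rides, horse).
score = combined_score(visual_logp=-0.3, s=0, p=0, o=1)
```

Here an implausible triple would receive a low prior score, pulling down its combined score even when the visual evidence alone is ambiguous.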
Cite
Text
Baier et al. "Improving Information Extraction from Images with Learned Semantic Models." International Joint Conference on Artificial Intelligence, 2018. doi:10.24963/IJCAI.2018/724
Markdown
[Baier et al. "Improving Information Extraction from Images with Learned Semantic Models." International Joint Conference on Artificial Intelligence, 2018.](https://mlanthology.org/ijcai/2018/baier2018ijcai-improving/) doi:10.24963/IJCAI.2018/724
BibTeX
@inproceedings{baier2018ijcai-improving,
title = {{Improving Information Extraction from Images with Learned Semantic Models}},
author = {Baier, Stephan and Ma, Yunpu and Tresp, Volker},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {2018},
pages = {5214-5218},
doi = {10.24963/IJCAI.2018/724},
url = {https://mlanthology.org/ijcai/2018/baier2018ijcai-improving/}
}