Learning Conceptual Space Representations of Interrelated Concepts
Abstract
Several recently proposed methods aim to learn conceptual space representations from large text collections. These learned representations associate each object from a given domain of interest with a point in a high-dimensional Euclidean space, but they do not model the concepts from this domain, and thus cannot directly be used for categorization and related cognitive tasks. A natural solution is to represent concepts as Gaussians, learned from the representations of their instances, but this can only be done reliably if sufficiently many instances are given, which is often not the case. In this paper, we introduce a Bayesian model which addresses this problem by constructing informative priors from background knowledge about how the concepts of interest are interrelated. We show that this leads to substantially better predictions in a knowledge base completion task.
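To illustrate the underlying idea (a simplified sketch, not the authors' exact model), the Python snippet below fits a concept's Gaussian from its instance vectors under a Normal-Inverse-Wishart (NIW) conjugate prior. The prior here is constructed naively from a better-populated related concept, which is a hypothetical stand-in for the paper's knowledge-driven prior construction; all names and parameter choices are illustrative assumptions.

```python
import numpy as np

def map_gaussian(instances, prior_mean, prior_scatter, kappa0=1.0, nu0=None):
    """MAP estimate of a concept's Gaussian (mean, covariance) under a
    Normal-Inverse-Wishart prior; a simplified stand-in for the paper's model."""
    X = np.asarray(instances, dtype=float)   # (n, d) instance vectors
    n, d = X.shape
    if nu0 is None:
        nu0 = d + 2                          # weakly informative prior strength
    xbar = X.mean(axis=0)
    S = (X - xbar).T @ (X - xbar)            # scatter matrix of the instances
    # Standard conjugate NIW posterior update
    kappa_n = kappa0 + n
    nu_n = nu0 + n
    mu_n = (kappa0 * prior_mean + n * xbar) / kappa_n
    diff = (xbar - prior_mean)[:, None]
    scatter_n = prior_scatter + S + (kappa0 * n / kappa_n) * (diff @ diff.T)
    # Joint MAP covariance of the NIW posterior (Murphy 2012, eq. 4.215)
    cov_map = scatter_n / (nu_n + d + 2)
    return mu_n, cov_map

# Hypothetical usage: borrow the prior from a related, well-populated concept.
rng = np.random.default_rng(0)
related = rng.normal(size=(200, 50))         # many instances of a related concept
prior_mean = related.mean(axis=0)
prior_scatter = np.cov(related, rowvar=False) * (50 + 2)
few = rng.normal(size=(5, 50))               # only five instances of the target
mu, cov = map_gaussian(few, prior_mean, prior_scatter)
```

With many instances the data term dominates and the estimate approaches the empirical Gaussian; with few instances it shrinks toward the related concept's prior, which is the effect the paper's informative priors are designed to exploit.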
Cite
Text
Bouraoui and Schockaert. "Learning Conceptual Space Representations of Interrelated Concepts." International Joint Conference on Artificial Intelligence, 2018. doi:10.24963/IJCAI.2018/243

Markdown
[Bouraoui and Schockaert. "Learning Conceptual Space Representations of Interrelated Concepts." International Joint Conference on Artificial Intelligence, 2018.](https://mlanthology.org/ijcai/2018/bouraoui2018ijcai-learning/) doi:10.24963/IJCAI.2018/243

BibTeX
@inproceedings{bouraoui2018ijcai-learning,
title = {{Learning Conceptual Space Representations of Interrelated Concepts}},
author = {Bouraoui, Zied and Schockaert, Steven},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {2018},
pages = {1760--1766},
doi = {10.24963/IJCAI.2018/243},
url = {https://mlanthology.org/ijcai/2018/bouraoui2018ijcai-learning/}
}