Deep Learning with Relational Logic Representations
Abstract
Despite their significant success, all existing deep neural architectures based on static computational graphs processing fixed tensor representations face fundamental limitations when presented with dynamically sized and structured data. Examples include sparse multi-relational structures, found everywhere from biological networks and complex knowledge hypergraphs to logical theories. Likewise, given the cryptic nature of generalization and representation learning in neural networks, potential integration with the sheer amounts of existing symbolic abstractions present in human knowledge remains highly problematic. Here, we argue that these abilities, naturally present in symbolic approaches based on the expressive power of relational logic, must be adopted for further progress of neural networks, and we present a well-founded learning framework for the integration of deep and symbolic approaches based on the lifted modelling paradigm.
Cite
Text
Sourek. "Deep Learning with Relational Logic Representations." International Joint Conference on Artificial Intelligence, 2019. doi:10.24963/IJCAI.2019/920
Markdown
[Sourek. "Deep Learning with Relational Logic Representations." International Joint Conference on Artificial Intelligence, 2019.](https://mlanthology.org/ijcai/2019/sourek2019ijcai-deep/) doi:10.24963/IJCAI.2019/920
BibTeX
@inproceedings{sourek2019ijcai-deep,
title = {{Deep Learning with Relational Logic Representations}},
author = {Sourek, Gustav},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {2019},
  pages = {6462--6463},
doi = {10.24963/IJCAI.2019/920},
url = {https://mlanthology.org/ijcai/2019/sourek2019ijcai-deep/}
}