Deep Neural Networks via Complex Network Theory: A Perspective
Abstract
Answer Set Programming (ASP) and Large Language Models (LLMs) have emerged as powerful tools in Artificial Intelligence, offering unique capabilities in knowledge representation and natural language processing, respectively. In this paper, we combine the strengths of the two paradigms with the aim of improving the structured representation of complex knowledge encoded in natural language. In a nutshell, the structured representation is obtained by combining syntactic structures extracted by LLMs with semantic aspects encoded in the knowledge base. The interaction between ASP and LLMs is driven by a YAML file specifying prompt templates and domain-specific background knowledge. The proposed approach is evaluated on a set of benchmarks based on a dataset derived from problems of the ASP Competitions. The results of our experiments show that ASP can substantially improve the F1-score, especially when relatively small models are used.
Cite
Text
La Malfa et al. "Deep Neural Networks via Complex Network Theory: A Perspective." International Joint Conference on Artificial Intelligence, 2024. doi:10.24963/ijcai.2024/482

Markdown
[La Malfa et al. "Deep Neural Networks via Complex Network Theory: A Perspective." International Joint Conference on Artificial Intelligence, 2024.](https://mlanthology.org/ijcai/2024/malfa2024ijcai-deep/) doi:10.24963/ijcai.2024/482

BibTeX
@inproceedings{malfa2024ijcai-deep,
title = {{Deep Neural Networks via Complex Network Theory: A Perspective}},
author = {La Malfa, Emanuele and La Malfa, Gabriele and Nicosia, Giuseppe and Latora, Vito},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {2024},
pages = {4361--4369},
doi = {10.24963/ijcai.2024/482},
url = {https://mlanthology.org/ijcai/2024/malfa2024ijcai-deep/}
}