Graph Neural Networks Meet Neural-Symbolic Computing: A Survey and Perspective
Abstract
We review PLTLf and PLDLf, the pure-past versions of the well-known logics on finite traces LTLf and LDLf, respectively. PLTLf and PLDLf are logics about the past, and so they scan the trace backwards, from the end towards the beginning. Because of this, we can exploit a foundational result on reverse languages to get an exponential improvement, over LTLf/LDLf, for computing the corresponding DFA. This exponential improvement is reflected in several forms of sequential decision making involving temporal specifications, such as planning and decision problems in non-deterministic and non-Markovian domains. Interestingly, PLTLf (resp., PLDLf) has the same expressive power as LTLf (resp., LDLf), but transforming a PLTLf (resp., PLDLf) formula into its equivalent LTLf (resp., LDLf) formula is quite expensive. Hence, to take advantage of the exponential improvement, properties of interest must be directly expressed in PLTLf/PLDLf.
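The backward-looking semantics mentioned above can be illustrated with a small sketch. The following Python code (a minimal illustration, not the paper's DFA construction; the encoding and names such as `eval_pltlf` are hypothetical) evaluates a pure-past formula with Yesterday (`Y`) and Since (`S`) operators at the last instant of a finite trace. Because every operator refers only to the past, one sweep that carries each subformula's value at the previous instant suffices.

```python
# Hedged sketch: checking a PLTLf-style formula at the final instant of a
# finite trace. Formulas are nested tuples (encoding is an assumption):
#   ('atom', 'p')        p holds now
#   ('not', f), ('and', f, g), ('or', f, g)
#   ('Y', f)             Yesterday: f held at the previous instant
#   ('S', f, g)          f Since g: g held at some past-or-present instant
#                        and f has held at every instant after it
# A trace is a list of sets of atomic propositions.

def eval_pltlf(formula, trace):
    """Return the truth value of `formula` at the last instant of `trace`."""
    # Collect subformulas in post-order so children are evaluated first.
    subs = []

    def collect(f):
        if f[0] in ('not', 'Y'):
            collect(f[1])
        elif f[0] in ('and', 'or', 'S'):
            collect(f[1])
            collect(f[2])
        if f not in subs:
            subs.append(f)

    collect(formula)

    prev = {}  # subformula -> value at the previous instant
    for state in trace:
        cur = {}
        for f in subs:
            op = f[0]
            if op == 'atom':
                cur[f] = f[1] in state
            elif op == 'not':
                cur[f] = not cur[f[1]]
            elif op == 'and':
                cur[f] = cur[f[1]] and cur[f[2]]
            elif op == 'or':
                cur[f] = cur[f[1]] or cur[f[2]]
            elif op == 'Y':
                # Value of the operand one instant ago (False at the start).
                cur[f] = prev.get(f[1], False)
            elif op == 'S':
                # f1 S f2 holds now iff f2 holds now, or f1 holds now and
                # f1 S f2 already held at the previous instant.
                cur[f] = cur[f[2]] or (cur[f[1]] and prev.get(f, False))
        prev = cur
    return prev.get(formula, False)
```

Each subformula needs only its value at the previous instant, so memory is constant in the trace length; this fixed-memory forward sweep is the operational counterpart of "scanning the trace backwards from the end" in the abstract.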
Cite
Text
Lamb et al. "Graph Neural Networks Meet Neural-Symbolic Computing: A Survey and Perspective." International Joint Conference on Artificial Intelligence, 2020. doi:10.24963/IJCAI.2020/679

Markdown
[Lamb et al. "Graph Neural Networks Meet Neural-Symbolic Computing: A Survey and Perspective." International Joint Conference on Artificial Intelligence, 2020.](https://mlanthology.org/ijcai/2020/lamb2020ijcai-graph/) doi:10.24963/IJCAI.2020/679

BibTeX
@inproceedings{lamb2020ijcai-graph,
title = {{Graph Neural Networks Meet Neural-Symbolic Computing: A Survey and Perspective}},
author = {Lamb, Luís C. and Garcez, Artur S. d'Avila and Gori, Marco and Prates, Marcelo O. R. and Avelar, Pedro H. C. and Vardi, Moshe Y.},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {2020},
pages = {4877-4884},
doi = {10.24963/IJCAI.2020/679},
url = {https://mlanthology.org/ijcai/2020/lamb2020ijcai-graph/}
}