Teaching LTLf Satisfiability Checking to Neural Networks
Abstract
Linear temporal logic over finite traces (LTLf) satisfiability checking is a fundamental and hard (PSPACE-complete) problem in the artificial intelligence community. We explore teaching end-to-end neural networks to check satisfiability in polynomial time. It is challenging to characterize the syntactic and semantic features of LTLf via neural networks. To tackle this challenge, we propose LTLfNet, a recursive neural network that captures syntactic features of LTLf by recursively combining the embeddings of sub-formulae. LTLfNet models permutation invariance and sequentiality in the semantics of LTLf through different aggregation mechanisms over sub-formulae. Experimental results demonstrate that LTLfNet achieves good performance on synthetic datasets and generalizes to large-scale datasets. They also show that LTLfNet is competitive with state-of-the-art symbolic approaches such as nuXmv and CDLSC.
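To make the architectural idea concrete, below is a minimal, hypothetical sketch (not the authors' implementation) of a recursive encoder in PyTorch: formulae are embedded bottom-up, children of commutative operators are aggregated with a permutation-invariant sum, and children of sequential operators are aggregated with an order-sensitive GRU. All names, dimensions, and the operator sets are illustrative assumptions.

```python
# Hypothetical sketch of the idea in the abstract, not the paper's LTLfNet code.
import torch
import torch.nn as nn

DIM = 64
ATOMS = ["p", "q", "r"]                       # hypothetical atomic propositions
COMMUTATIVE = {"And", "Or"}                   # children may be permuted
SEQUENTIAL = {"Until", "Release"}             # child order matters
UNARY = {"Not", "Next", "Eventually", "Always"}

class FormulaEncoder(nn.Module):
    def __init__(self, dim=DIM):
        super().__init__()
        self.atom_emb = nn.Embedding(len(ATOMS), dim)
        ops = COMMUTATIVE | SEQUENTIAL | UNARY
        self.op_proj = nn.ModuleDict({op: nn.Linear(dim, dim) for op in ops})
        self.seq_agg = nn.GRU(dim, dim, batch_first=True)  # order-sensitive aggregation
        self.sat_head = nn.Linear(dim, 1)      # satisfiable vs. unsatisfiable score

    def encode(self, node):
        """node is ('atom', name) or (op, [children]); returns a (dim,) embedding."""
        kind = node[0]
        if kind == "atom":
            return self.atom_emb(torch.tensor(ATOMS.index(node[1])))
        children = [self.encode(c) for c in node[1]]
        if kind in COMMUTATIVE:                # permutation-invariant: sum of children
            agg = torch.stack(children).sum(dim=0)
        elif kind in SEQUENTIAL:               # sequential: run a GRU over children in order
            _, h = self.seq_agg(torch.stack(children).unsqueeze(0))
            agg = h.squeeze(0).squeeze(0)
        else:                                  # unary operator: single child
            agg = children[0]
        return torch.relu(self.op_proj[kind](agg))

    def forward(self, formula):
        return torch.sigmoid(self.sat_head(self.encode(formula)))

# Example: score the formula (p U q) & G(r), written as a nested tuple tree.
model = FormulaEncoder()
phi = ("And", [("Until", [("atom", "p"), ("atom", "q")]),
               ("Always", [("atom", "r")])])
print(model(phi))  # probability-like score in (0, 1)
```

The split between a symmetric aggregation (sum) and an order-sensitive one (GRU) mirrors the abstract's distinction between permutation invariance and sequentiality in LTLf semantics; the actual LTLfNet layers and training setup are described in the paper itself.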
Cite
Text

Luo et al. "Teaching LTLf Satisfiability Checking to Neural Networks." International Joint Conference on Artificial Intelligence, 2022. doi:10.24963/IJCAI.2022/457

Markdown

[Luo et al. "Teaching LTLf Satisfiability Checking to Neural Networks." International Joint Conference on Artificial Intelligence, 2022.](https://mlanthology.org/ijcai/2022/luo2022ijcai-teaching/) doi:10.24963/IJCAI.2022/457

BibTeX
@inproceedings{luo2022ijcai-teaching,
title = {{Teaching LTLf Satisfiability Checking to Neural Networks}},
author = {Luo, Weilin and Wan, Hai and Du, Jianfeng and Li, Xiaoda and Fu, Yuze and Ye, Rongzhen and Zhang, Delong},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {2022},
pages = {3292-3298},
doi = {10.24963/IJCAI.2022/457},
url = {https://mlanthology.org/ijcai/2022/luo2022ijcai-teaching/}
}