LogicMP: A Neuro-Symbolic Approach for Encoding First-Order Logic Constraints

Abstract

Integrating first-order logic constraints (FOLCs) with neural networks is a crucial but challenging problem, since it involves modeling intricate correlations to satisfy the constraints. This paper proposes a novel neural layer, LogicMP, which performs mean-field variational inference over a Markov Logic Network (MLN). It can be plugged into any off-the-shelf neural network to encode FOLCs while retaining modularity and efficiency. By exploiting the structure and symmetries in MLNs, we theoretically demonstrate that our well-designed, efficient mean-field iterations greatly mitigate the difficulty of MLN inference, reducing the inference from sequential calculation to a series of parallel tensor operations. Empirical results on three kinds of tasks over images, graphs, and text show that LogicMP outperforms advanced competitors in both performance and efficiency.
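
To make the abstract's central claim concrete, below is a minimal sketch of one mean-field iteration for a toy MLN with a single weighted rule, ∀i: A(i) → B(i), written in NumPy. This is an illustrative assumption-laden sketch, not the paper's implementation: the names `mean_field_step`, `logits_a`, and `w` are hypothetical, and LogicMP itself handles general first-order rules with substantially more machinery. It shows only how, for one simple rule, the mean-field messages to every grounding can be computed at once as vectorized tensor operations rather than sequentially.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def mean_field_step(logits_a, logits_b, q_a, q_b, w):
    """One mean-field update for the rule: forall i, A(i) -> B(i),
    i.e. the clause NOT A(i) OR B(i) with weight w.

    The clause is violated only when A(i)=1 and B(i)=0, so under the
    factorized posterior q the expected weighted satisfaction gives
    each atom a logit-space message. All groundings i are updated
    simultaneously via elementwise tensor ops.
    """
    msg_to_b = w * q_a           # B(i)=1 always satisfies the clause
    msg_to_a = -w * (1.0 - q_b)  # A(i)=1 risks violation when B(i)=0
    return sigmoid(logits_a + msg_to_a), sigmoid(logits_b + msg_to_b)

# Evidence logits, e.g. as produced by an upstream neural network.
rng = np.random.default_rng(0)
logits_a, logits_b = rng.normal(size=5), rng.normal(size=5)
q_a, q_b = sigmoid(logits_a), sigmoid(logits_b)
for _ in range(10):  # a few iterations typically suffice to converge
    q_a, q_b = mean_field_step(logits_a, logits_b, q_a, q_b, w=2.0)
print(q_a, q_b)
```

After the iterations, the marginals of B are pushed up wherever A is likely true, and the marginals of A are pushed down wherever B is likely false, softly enforcing the implication; every step is a fixed set of parallel array operations, which is what makes this style of inference practical as a neural-network layer.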

Cite

Text

Xu et al. "LogicMP: A Neuro-Symbolic Approach for Encoding First-Order Logic Constraints." International Conference on Learning Representations, 2024.

Markdown

[Xu et al. "LogicMP: A Neuro-Symbolic Approach for Encoding First-Order Logic Constraints." International Conference on Learning Representations, 2024.](https://mlanthology.org/iclr/2024/xu2024iclr-logicmp/)

BibTeX

@inproceedings{xu2024iclr-logicmp,
  title     = {{LogicMP: A Neuro-Symbolic Approach for Encoding First-Order Logic Constraints}},
  author    = {Xu, Weidi and Wang, Jingwei and Xie, Lele and He, Jianshan and Zhou, Hongting and Wang, Taifeng and Wan, Xiaopei and Chen, Jingdong and Qu, Chao and Chu, Wei},
  booktitle = {International Conference on Learning Representations},
  year      = {2024},
  url       = {https://mlanthology.org/iclr/2024/xu2024iclr-logicmp/}
}