Soft-Unification in Deep Probabilistic Logic
Abstract
A fundamental challenge in neuro-symbolic AI is to devise primitives that fuse logical and neural concepts. The Neural Theorem Prover proposed the notion of soft-unification to turn the symbolic comparison between terms (i.e. unification) into a comparison in embedding space. Soft-unification has been shown to be a powerful mechanism that can learn logic rules in an end-to-end differentiable manner. We study soft-unification from a conceptual point of view and outline several desirable properties of this operation. These include non-redundancy in the proof, well-defined proof scores, and non-sparse gradients. Unfortunately, these properties are not satisfied by previous systems such as the Neural Theorem Prover. We therefore introduce a more principled framework called DeepSoftLog, based on probabilistic rather than fuzzy semantics. Our experiments demonstrate that DeepSoftLog can outperform the state-of-the-art on neuro-symbolic benchmarks, highlighting the benefits of these properties.
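To make the idea concrete, here is a minimal sketch of soft-unification: instead of requiring two terms to be syntactically identical, each term is mapped to an embedding and the match is scored by their similarity. The `soft_unify` helper below and the exponentiated negative Euclidean distance are illustrative assumptions, not DeepSoftLog's actual implementation; they merely show how a hard, binary unification check becomes a soft score in [0, 1].

```python
import math

def soft_unify(emb_a, emb_b):
    """Score how well two term embeddings 'unify'.

    Hypothetical helper: maps the Euclidean distance between the
    embeddings through exp(-d), so identical embeddings score 1.0
    and distant embeddings approach 0.0. Classical unification is
    the hard limit where only a score of exactly 1.0 counts.
    """
    dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(emb_a, emb_b)))
    return math.exp(-dist)

# Identical terms unify perfectly; different terms get a soft score.
print(soft_unify([1.0, 0.0], [1.0, 0.0]))  # → 1.0
print(0.0 < soft_unify([1.0, 0.0], [0.0, 1.0]) < 1.0)  # → True
```

Because the score is a smooth function of the embeddings, it can be trained by gradient descent; the paper's point is that the semantics attached to such scores (probabilistic vs. fuzzy) determines whether proof scores remain well defined.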
Cite
Maene and De Raedt. "Soft-Unification in Deep Probabilistic Logic." Neural Information Processing Systems, 2023.
BibTeX
@inproceedings{maene2023neurips-softunification,
title = {{Soft-Unification in Deep Probabilistic Logic}},
author = {Maene, Jaron and De Raedt, Luc},
booktitle = {Neural Information Processing Systems},
year = {2023},
url = {https://mlanthology.org/neurips/2023/maene2023neurips-softunification/}
}