An Evaluation of Approaches to Train Embeddings for Logical Inference (Student Abstract)

Abstract

Knowledge bases traditionally require manual optimization to ensure reasonable performance when answering queries. We build on previous neurosymbolic approaches by improving the training of an embedding model for logical statements that maximizes the similarity of unifying atoms and minimizes the similarity of non-unifying atoms. In particular, we evaluate different approaches to training this model.
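The objective described above is a contrastive one: pull embeddings of unifying atoms together and push embeddings of non-unifying atoms apart. The paper does not specify its architecture or loss, so the following is only a minimal sketch of that general idea, assuming a toy setup in which an atom is a token sequence (predicate plus arguments), its embedding is the mean of learned token vectors, and training minimizes a margin-based triplet loss via finite-difference gradient descent. All atoms, tokens, and hyperparameters here are hypothetical illustrations, not the authors' method.

```python
import math
import random

random.seed(0)
DIM = 8

# Hypothetical vocabulary of predicate symbols, constants, and a variable.
vocab = ["parent", "ancestor", "alice", "bob", "X"]
emb = {tok: [random.gauss(0.0, 0.1) for _ in range(DIM)] for tok in vocab}

def encode(atom):
    """Embed an atom (a list of tokens) as the mean of its token vectors."""
    vecs = [emb[t] for t in atom]
    return [sum(v[i] for v in vecs) / len(vecs) for i in range(DIM)]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv + 1e-9)

def triplet_loss(anchor, pos, neg, margin=0.5):
    """Hinge loss: unifying pair should beat the non-unifying pair by `margin`."""
    return max(0.0, margin
               - cosine(encode(anchor), encode(pos))
               + cosine(encode(anchor), encode(neg)))

def train_step(anchor, pos, neg, lr=0.1, eps=1e-4):
    """One SGD step using central finite differences (fine at toy scale)."""
    grads = {}
    for tok in vocab:
        g = []
        for i in range(DIM):
            old = emb[tok][i]
            emb[tok][i] = old + eps
            up = triplet_loss(anchor, pos, neg)
            emb[tok][i] = old - eps
            down = triplet_loss(anchor, pos, neg)
            emb[tok][i] = old
            g.append((up - down) / (2 * eps))
        grads[tok] = g
    for tok, g in grads.items():
        for i in range(DIM):
            emb[tok][i] -= lr * g[i]

# parent(alice, X) unifies with parent(alice, bob) but not ancestor(bob, X).
anchor = ["parent", "alice", "X"]
pos = ["parent", "alice", "bob"]
neg = ["ancestor", "bob", "X"]

before = triplet_loss(anchor, pos, neg)
for _ in range(50):
    train_step(anchor, pos, neg)
after = triplet_loss(anchor, pos, neg)
```

A real system would replace the mean-of-tokens encoder with a learned sequence model and the finite-difference step with backpropagation; the sketch only shows the shape of the unification-driven contrastive objective.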

Cite

Text

White et al. "An Evaluation of Approaches to Train Embeddings for Logical Inference (Student Abstract)." AAAI Conference on Artificial Intelligence, 2025. doi:10.1609/aaai.v39i28.35313

Markdown

[White et al. "An Evaluation of Approaches to Train Embeddings for Logical Inference (Student Abstract)." AAAI Conference on Artificial Intelligence, 2025.](https://mlanthology.org/aaai/2025/white2025aaai-evaluation/) doi:10.1609/aaai.v39i28.35313

BibTeX

@inproceedings{white2025aaai-evaluation,
  title     = {{An Evaluation of Approaches to Train Embeddings for Logical Inference (Student Abstract)}},
  author    = {White, Yasir and Lipsey, Jevon and Heflin, Jeff},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2025},
  pages     = {29527--29528},
  doi       = {10.1609/aaai.v39i28.35313},
  url       = {https://mlanthology.org/aaai/2025/white2025aaai-evaluation/}
}