Towards Logically Consistent Language Models via Probabilistic Reasoning

Abstract

Large language models (LLMs) are a promising avenue for natural language understanding and generation tasks. However, current LLMs are far from reliable: they are prone to generating non-factual information and, more crucially, to contradicting themselves when prompted to reason about beliefs of the world. These problems are currently addressed with large-scale fine-tuning or by delegating consistent reasoning to external tools. In this work, we strive for a middle ground and introduce a training objective based on principled probabilistic reasoning that teaches an LLM to be consistent with external knowledge in the form of a set of facts and rules. Fine-tuning with our loss on a limited set of facts enables our LLMs to be more logically consistent than previous baselines and allows them to extrapolate to unseen but semantically similar factual knowledge more systematically.
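As a rough illustration of the kind of objective the abstract describes (a sketch assuming a semantic-loss-style formulation, not necessarily the paper's exact loss): given a propositional constraint $K$ encoding the external facts and rules, and the model's probability $p_\theta(y_i)$ of asserting fact $i$ as true, such a term penalizes the probability mass placed on truth assignments that violate $K$:

$$
\mathcal{L}_{\text{SL}}(\theta) \;=\; -\log \sum_{\mathbf{y} \,\models\, K} \;\prod_{i:\, y_i = 1} p_\theta(y_i) \;\prod_{i:\, y_i = 0} \bigl(1 - p_\theta(y_i)\bigr),
$$

where the sum ranges over all truth assignments $\mathbf{y}$ that satisfy $K$. Minimizing a loss of this form alongside the usual language-modeling objective encourages the model's factual beliefs to remain jointly consistent with the given rules.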

Cite

Text

Calanzone et al. "Towards Logically Consistent Language Models via Probabilistic Reasoning." ICLR 2024 Workshops: R2-FM, 2024.

Markdown

[Calanzone et al. "Towards Logically Consistent Language Models via Probabilistic Reasoning." ICLR 2024 Workshops: R2-FM, 2024.](https://mlanthology.org/iclrw/2024/calanzone2024iclrw-logically/)

BibTeX

@inproceedings{calanzone2024iclrw-logically,
  title     = {{Towards Logically Consistent Language Models via Probabilistic Reasoning}},
  author    = {Calanzone, Diego and Vergari, Antonio and Teso, Stefano},
  booktitle = {ICLR 2024 Workshops: R2-FM},
  year      = {2024},
  url       = {https://mlanthology.org/iclrw/2024/calanzone2024iclrw-logically/}
}