Aligning with Logic: Measuring, Evaluating and Improving Logical Preference Consistency in Large Language Models

Abstract

Large Language Models (LLMs) are expected to be predictable and trustworthy to support reliable decision-making systems. Yet current LLMs often show inconsistencies in their judgments. In this work, we examine logical preference consistency as a foundational requirement for building more dependable LLM systems, ensuring stable and coherent decision-making while minimizing erratic or contradictory outputs. To quantify logical preference consistency, we propose a universal evaluation framework based on three fundamental properties: transitivity, commutativity, and negation invariance. Through extensive experimentation across diverse LLMs, we demonstrate that these properties serve as strong indicators of judgment robustness. Furthermore, we introduce a data refinement and augmentation technique, REPAIR, that enhances logical consistency while maintaining alignment with human preferences. Finally, we show that improving consistency leads to better performance in LLM-driven logic-based algorithms, reinforcing stability and coherence in decision-making systems.
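To make the three properties concrete, below is a minimal illustrative sketch (Python), not the paper's official metrics: it assumes a hypothetical pairwise judge callable that returns +1 if an LLM prefers item a over item b and -1 otherwise, plus a negated variant of the same query, and counts how often transitivity, commutativity, and negation invariance hold across a set of items.

import itertools

# Hypothetical interface (not from the paper): judge(a, b) -> +1 if the LLM
# prefers a over b, -1 otherwise; negated_judge asks the logically negated
# question (e.g. "which is worse?") and should flip the preference.

def transitivity_rate(items, judge):
    """Fraction of item triples whose judgments contain no preference cycle."""
    ok = total = 0
    for a, b, c in itertools.combinations(items, 3):
        total += 1
        ab, bc, ca = judge(a, b), judge(b, c), judge(c, a)
        # A cycle such as a>b, b>c, c>a (or its reverse) violates transitivity.
        if not (ab == bc == ca):
            ok += 1
    return ok / total if total else 1.0

def commutativity_rate(items, judge):
    """Fraction of pairs judged the same way regardless of presentation order."""
    ok = total = 0
    for a, b in itertools.combinations(items, 2):
        total += 1
        if judge(a, b) == -judge(b, a):
            ok += 1
    return ok / total if total else 1.0

def negation_invariance_rate(items, judge, negated_judge):
    """Fraction of pairs where negating the question flips the preference."""
    ok = total = 0
    for a, b in itertools.combinations(items, 2):
        total += 1
        if negated_judge(a, b) == -judge(a, b):
            ok += 1
    return ok / total if total else 1.0

Each rate lies in [0, 1], with 1 indicating perfectly consistent judgments for that property; the paper's actual consistency measures may aggregate or normalize these checks differently.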

Cite

Text

Liu et al. "Aligning with Logic: Measuring, Evaluating and Improving Logical Preference Consistency in Large Language Models." Proceedings of the 42nd International Conference on Machine Learning, 2025.

Markdown

[Liu et al. "Aligning with Logic: Measuring, Evaluating and Improving Logical Preference Consistency in Large Language Models." Proceedings of the 42nd International Conference on Machine Learning, 2025.](https://mlanthology.org/icml/2025/liu2025icml-aligning/)

BibTeX

@inproceedings{liu2025icml-aligning,
  title     = {{Aligning with Logic: Measuring, Evaluating and Improving Logical Preference Consistency in Large Language Models}},
  author    = {Liu, Yinhong and Guo, Zhijiang and Liang, Tianya and Shareghi, Ehsan and Vulić, Ivan and Collier, Nigel},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  year      = {2025},
  pages     = {38518--38539},
  volume    = {267},
  url       = {https://mlanthology.org/icml/2025/liu2025icml-aligning/}
}