Enhancing the Logical Reasoning Abilities of Large Language Models

Abstract

Large language models (LLMs) have demonstrated impressive progress on various natural language processing tasks. However, it has been observed that LLMs still struggle with complex causal and logical reasoning. To advance this research direction, we first proposed a training method to distinguish causal relationships from spurious correlations in sentiment classification tasks. We then conducted a comprehensive survey categorizing existing approaches, identifying the main challenges as complex logical question-answering and logical inconsistency across different questions. Our ongoing projects focus on two directions: (1) incorporating modal and epistemic logic to evaluate and enhance LLMs’ ability to handle more complex and diverse reasoning tasks, and (2) training LLMs in phases with curriculum learning to improve their logical reasoning performance.

Cite

Text

Fengxiang Cheng. "Enhancing the Logical Reasoning Abilities of Large Language Models." International Joint Conference on Artificial Intelligence, 2025, pp. 10969–10970. doi:10.24963/IJCAI.2025/1239

Markdown

[Fengxiang Cheng. "Enhancing the Logical Reasoning Abilities of Large Language Models." International Joint Conference on Artificial Intelligence, 2025, pp. 10969–10970.](https://mlanthology.org/ijcai/2025/cheng2025ijcai-enhancing/) doi:10.24963/IJCAI.2025/1239

BibTeX

@inproceedings{cheng2025ijcai-enhancing,
  title     = {{Enhancing the Logical Reasoning Abilities of Large Language Models}},
  author    = {Cheng, Fengxiang},
  booktitle = {International Joint Conference on Artificial Intelligence},
  year      = {2025},
  pages     = {10969--10970},
  doi       = {10.24963/IJCAI.2025/1239},
  url       = {https://mlanthology.org/ijcai/2025/cheng2025ijcai-enhancing/}
}