Recursive Decomposition of Logical Thoughts: Framework for Superior Reasoning and Knowledge Propagation in Large Language Models
Abstract
Background: Large Language Models (LLMs) often struggle with multi-step reasoning due to cascading errors, rigid prompt structures, and underutilized intermediate reasoning steps. While prompting strategies such as Chain-of-Thought (CoT), CoT with Self-Consistency, and Least-to-Most offer partial improvements, they typically lack mechanisms for feedback-driven learning or structured reuse of prior thought sequences.

Objectives: This work introduces Recursive Decomposition of Logical Thoughts (RDoLT), a cognitively inspired prompting framework that enhances LLM reasoning through hierarchical decomposition, multi-feature scoring, and knowledge propagation. The framework aims to overcome linear reasoning limitations by enabling structured, memory-aware exploration of thought spaces.

Methods: RDoLT executes a three-stage iterative reasoning process across Easy, Intermediate, and Final tiers. At each level, multiple candidate thoughts are generated and scored on Logical Validity, Coherence, Simplicity, and Adaptiveness. The Knowledge Propagation Module (KPM) persistently tracks both selected and rejected thoughts, allowing future reasoning stages to reuse contextually relevant but previously discarded knowledge. The framework supports adaptive thresholding, controlled reasoning depth, and edge-case regeneration through structured feedback loops.

Results: Extensive evaluation across five reasoning benchmarks demonstrates that RDoLT outperforms the most competitive prompting strategies in both accuracy and stability. On GSM8K, RDoLT achieves 90.98% accuracy with ChatGPT-4o, surpassing CoT-SC (89.4%) and ReAct (90.5%). It improves Gemma 2 (27B) performance on SVAMP from 69.86% (Vanilla) to 75.27%, and on MultiArith from 67.96% (Vanilla) to 72.49%. Across all benchmarks, RDoLT outperforms or matches the strongest baseline in over 60% of settings, highlighting its robustness across diverse reasoning tasks and model scales. Ablation studies reveal that generating three thoughts per stage yields the best trade-off between performance and efficiency, while the KPM consistently reduces reasoning variance by leveraging both accepted and discarded thoughts across stages.

Conclusions: RDoLT presents a scalable reasoning paradigm grounded in cognitive principles. Its integration of hierarchical decomposition, structured scoring, and selective memory propagation enables more reliable and adaptive reasoning in LLMs. These results establish RDoLT as a robust prompt engineering framework with broad applicability; future work will focus on optimizing token efficiency and extending the framework to domain-specific use cases.
Cite
Text
Qasim et al. "Recursive Decomposition of Logical Thoughts: Framework for Superior Reasoning and Knowledge Propagation in Large Language Models." Journal of Artificial Intelligence Research, 2025. doi:10.1613/JAIR.1.18562

Markdown

[Qasim et al. "Recursive Decomposition of Logical Thoughts: Framework for Superior Reasoning and Knowledge Propagation in Large Language Models." Journal of Artificial Intelligence Research, 2025.](https://mlanthology.org/jair/2025/qasim2025jair-recursive/) doi:10.1613/JAIR.1.18562

BibTeX
@article{qasim2025jair-recursive,
title = {{Recursive Decomposition of Logical Thoughts: Framework for Superior Reasoning and Knowledge Propagation in Large Language Models}},
author = {Qasim, Kaleem Ullah and Zhang, Jiashu and Alsahfi, Tariq and Butt, Ateeq Ur Rehman},
journal = {Journal of Artificial Intelligence Research},
year = {2025},
doi = {10.1613/JAIR.1.18562},
volume = {83},
url = {https://mlanthology.org/jair/2025/qasim2025jair-recursive/}
}