The Curse of CoT: On the Limitations of Chain-of-Thought in In-Context Learning

Abstract

Chain-of-Thought (CoT) prompting has been widely recognized for its ability to enhance reasoning capabilities in large language models (LLMs). However, our study reveals a surprising counterpoint to this prevailing perspective within the fundamental domain of pattern-based in-context learning (ICL). Through extensive experiments involving 16 state-of-the-art LLMs and nine diverse pattern-based ICL datasets, we demonstrate that CoT and its reasoning variants consistently underperform direct answering across varying model scales and benchmark complexities. To systematically investigate this unexpected phenomenon, we designed targeted experiments to validate several hypothetical explanations. Our analysis uncovers a fundamental hybrid mechanism of explicit-implicit reasoning driving CoT's performance in pattern-based ICL: while explicit reasoning falters due to LLMs' struggles to infer underlying patterns from demonstrations, implicit reasoning, despite being disrupted by the increased contextual distance of CoT rationales, often compensates, delivering correct answers despite flawed rationales. This hybrid mechanism explains CoT's relative underperformance: noise from weak explicit inference undermines the process, even as implicit mechanisms partially salvage outcomes. Notably, even long-CoT reasoning models, which excel in abstract and symbolic reasoning, fail to fully overcome these limitations despite incurring higher computational costs. Our findings challenge existing assumptions regarding the universal efficacy of CoT, yielding novel insights into its limitations and guiding future research toward more nuanced and effective reasoning methodologies for LLMs.

Cite

Text

Zheng et al. "The Curse of CoT: On the Limitations of Chain-of-Thought in In-Context Learning." Transactions on Machine Learning Research, 2025.

Markdown

[Zheng et al. "The Curse of CoT: On the Limitations of Chain-of-Thought in In-Context Learning." Transactions on Machine Learning Research, 2025.](https://mlanthology.org/tmlr/2025/zheng2025tmlr-curse/)

BibTeX

@article{zheng2025tmlr-curse,
  title     = {{The Curse of CoT: On the Limitations of Chain-of-Thought in In-Context Learning}},
  author    = {Zheng, Tianshi and Chen, Yixiang and Li, Chengxi and Li, Chunyang and Zong, Qing and Shi, Haochen and Xu, Baixuan and Song, Yangqiu and Wong, Ginny and See, Simon},
  journal   = {Transactions on Machine Learning Research},
  year      = {2025},
  url       = {https://mlanthology.org/tmlr/2025/zheng2025tmlr-curse/}
}