Do NOT Think That Much for 2+3=? on the Overthinking of Long Reasoning Models
Abstract
The remarkable performance of long reasoning models can be attributed to their ability to emulate human-like extended thinking during inference. These models employ long chain-of-thought (CoT) processes, exploring multiple strategies to enhance problem-solving capabilities. However, a critical question remains: how can computational resources be scaled intelligently and efficiently at test time? This paper presents the first comprehensive study on the prevalent issue of overthinking in these models, where long reasoning models generate redundant solutions that contribute minimally to accuracy and diversity, thereby wasting computational resources on simple problems. We introduce novel efficiency metrics from both outcome and process perspectives to evaluate the rational use of computational resources by long reasoning models. Using a self-training paradigm, we propose strategies to mitigate overthinking, streamlining reasoning processes without compromising accuracy. Experimental results show that our approach successfully reduces computational overhead while preserving model performance across test sets of varying difficulty, such as GSM8K, MATH500, GPQA, and AIME. Our code is open-source and available at https://github.com/galaxyChen/overthinking.
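The abstract refers to outcome- and process-level efficiency metrics without giving formulas here. As a rough illustration only, the sketch below shows one plausible outcome-style efficiency measure: the fraction of generated tokens that were actually needed to reach the first correct solution, averaged over problems. This is a minimal sketch under assumptions; the function and argument names are illustrative and are not the paper's actual definitions or its released code.

```python
# Minimal sketch (NOT the paper's official metric) of an outcome-style
# efficiency score: tokens spent up to the first correct solution,
# divided by total tokens generated, averaged over problems.
# Names like `first_correct_token_index` are illustrative assumptions.

from typing import List, Optional


def outcome_efficiency(
    total_tokens: List[int],
    first_correct_token_index: List[Optional[int]],
) -> float:
    """Average of (tokens up to first correct answer) / (total tokens).

    A problem where the model never produces a correct solution
    contributes 0 to the average.
    """
    scores = []
    for total, first_correct in zip(total_tokens, first_correct_token_index):
        if first_correct is None:  # no correct solution found
            scores.append(0.0)
        else:
            scores.append(first_correct / total)
    return sum(scores) / len(scores) if scores else 0.0


# Example: a model that answers "2+3=5" correctly within its first 30
# tokens but keeps reasoning for 900 more tokens scores 30/930 ~= 0.032,
# i.e. roughly 97% of its computation was redundant for this problem.
print(outcome_efficiency([930], [30]))
```

Under this kind of measure, overthinking on easy problems shows up directly as a low score, while a model that stops shortly after its first correct solution scores close to 1.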
Cite
Text
Chen et al. "Do NOT Think That Much for 2+3=? on the Overthinking of Long Reasoning Models." Proceedings of the 42nd International Conference on Machine Learning, 2025.

Markdown

[Chen et al. "Do NOT Think That Much for 2+3=? on the Overthinking of Long Reasoning Models." Proceedings of the 42nd International Conference on Machine Learning, 2025.](https://mlanthology.org/icml/2025/chen2025icml-think/)

BibTeX
@inproceedings{chen2025icml-think,
title = {{Do NOT Think That Much for 2+3=? on the Overthinking of Long Reasoning Models}},
author = {Chen, Xingyu and Xu, Jiahao and Liang, Tian and He, Zhiwei and Pang, Jianhui and Yu, Dian and Song, Linfeng and Liu, Qiuzhi and Zhou, Mengfei and Zhang, Zhuosheng and Wang, Rui and Tu, Zhaopeng and Mi, Haitao and Yu, Dong},
booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
year = {2025},
pages = {9487--9499},
volume = {267},
url = {https://mlanthology.org/icml/2025/chen2025icml-think/}
}