No Stress No Gain: Stress Testing Based Self-Consistency for Olympiad Programming
Abstract
We introduce a stress-testing approach to improve the performance of large language reasoning models on challenging competitive programming problems. By combining stress testing, inspired by a technique commonly used by expert programmers, with self-consistency and self-debugging methods, we demonstrate significant improvements in solution accuracy. Our method generates multiple brute-force solutions to validate and filter candidate solutions, outperforming traditional majority voting. Experimental results show that our approach narrows the gap between pass@k and majority-voting scores on the USACO benchmark for both o1-mini and o3-mini, solving up to 246 of 307 problems, 17 more than vanilla self-consistency.
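The filtering idea described in the abstract can be sketched in a few lines. The toy problem (maximum subarray sum), the candidate solutions, and the brute-force reference below are all illustrative assumptions, not from the paper: random small inputs are generated, each candidate is kept only if it agrees with a slow brute-force solution on every trial, and the final answer is a majority vote over the surviving candidates.

```python
# Hypothetical sketch of stress-testing-based filtering: the problem,
# candidates, and brute force here are illustrative, not the paper's.
import random
from collections import Counter

def brute_force(a):
    # O(n^2) reference: enumerate all subarrays explicitly.
    best = a[0]
    for i in range(len(a)):
        s = 0
        for j in range(i, len(a)):
            s += a[j]
            best = max(best, s)
    return best

def candidate_ok(a):
    # Correct O(n) Kadane's algorithm.
    best = cur = a[0]
    for x in a[1:]:
        cur = max(x, cur + x)
        best = max(best, cur)
    return best

def candidate_buggy(a):
    # Bug: implicitly allows the empty subarray, so it returns 0
    # instead of the true maximum on all-negative inputs.
    best = cur = 0
    for x in a:
        cur = max(0, cur + x)
        best = max(best, cur)
    return best

def stress_filter(candidates, brute, trials=200, seed=0):
    # Keep only candidates that match the brute force on random small inputs.
    rng = random.Random(seed)
    survivors = []
    for cand in candidates:
        if all(
            cand(a) == brute(a)
            for a in (
                [rng.randint(-5, 5) for _ in range(rng.randint(1, 8))]
                for _ in range(trials)
            )
        ):
            survivors.append(cand)
    return survivors

# Majority vote over the surviving candidates on a held-out input.
survivors = stress_filter([candidate_ok, candidate_buggy], brute_force)
test_input = [-3, -1, -2]
votes = Counter(f(test_input) for f in survivors)
answer = votes.most_common(1)[0][0]
print(len(survivors), answer)  # the buggy candidate should be filtered out
```

The random trials are small on purpose: tiny inputs make the brute force fast and still expose most logic bugs, which is the standard stress-testing trade-off among competitive programmers.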
Cite
Text
Singh et al. "No Stress No Gain: Stress Testing Based Self-Consistency for Olympiad Programming." ICLR 2025 Workshops: VerifAI, 2025.
Markdown
[Singh et al. "No Stress No Gain: Stress Testing Based Self-Consistency for Olympiad Programming." ICLR 2025 Workshops: VerifAI, 2025.](https://mlanthology.org/iclrw/2025/singh2025iclrw-stress/)
BibTeX
@inproceedings{singh2025iclrw-stress,
title = {{No Stress No Gain: Stress Testing Based Self-Consistency for Olympiad Programming}},
author = {Singh, Kunal and Bhowmick, Sayandeep and Moturi, Pradeep and Gollapalli, Siva Kishore},
booktitle = {ICLR 2025 Workshops: VerifAI},
year = {2025},
url = {https://mlanthology.org/iclrw/2025/singh2025iclrw-stress/}
}