SBSC: Step-by-Step Coding for Improving Mathematical Olympiad Performance
Abstract
We propose Step-by-Step Coding (SBSC): a multi-turn math reasoning framework that enables Large Language Models (LLMs) to generate a sequence of programs for solving Olympiad-level math problems. At each turn/step, the model leverages the code-execution outputs and programs of previous steps to generate the next sub-task and the corresponding program to complete it. SBSC allows a more granular, flexible, and precise approach to problem-solving than existing methods. Extensive experiments highlight the effectiveness of SBSC in tackling competition- and Olympiad-level math problems. For Claude-3.5-Sonnet, we observe that SBSC (greedy decoding) surpasses existing state-of-the-art (SOTA) program-generation-based reasoning strategies by an absolute 10.7% on AMC12, 8% on AIME and 12.6% on MathOdyssey. Since SBSC is multi-turn in nature, we also benchmark SBSC's greedy decoding against the self-consistency decoding results of existing SOTA math reasoning strategies and observe absolute performance gains of 6.2% on AMC12, 6.7% on AIME and 7.4% on MathOdyssey.
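The abstract describes a generate-execute-feedback loop: at each turn the model proposes a sub-task with a program, the program is run, and its output is appended to the conversation before the next turn. Below is a minimal Python sketch of such a multi-turn loop under our own assumptions; `llm_generate`, `run_code`, and the `FINAL ANSWER:` termination marker are hypothetical stand-ins, not the paper's actual prompts or released code.

```python
# Minimal sketch of an SBSC-style multi-turn loop (our illustration, not the
# authors' implementation). `llm_generate` is a hypothetical stand-in for any
# chat-completion call (e.g., Claude-3.5-Sonnet with greedy decoding).
import io
import re
from contextlib import redirect_stdout

def llm_generate(messages: list[dict]) -> str:
    """Hypothetical LLM call: returns the next sub-task plus a fenced Python
    block, or a final answer line once the problem is solved."""
    raise NotImplementedError("plug in your chat-completion client here")

def run_code(code: str, namespace: dict) -> str:
    """Execute one step's program and capture its stdout for the next turn."""
    buf = io.StringIO()
    try:
        with redirect_stdout(buf):
            exec(code, namespace)  # shared namespace lets later steps reuse results
    except Exception as e:
        buf.write(f"ERROR: {e}")
    return buf.getvalue().strip()

def sbsc_solve(problem: str, max_steps: int = 15) -> str | None:
    messages = [{"role": "user", "content": f"Solve step by step with code:\n{problem}"}]
    namespace: dict = {}
    for _ in range(max_steps):
        reply = llm_generate(messages)
        messages.append({"role": "assistant", "content": reply})
        if "FINAL ANSWER:" in reply:  # assumed termination marker
            return reply.split("FINAL ANSWER:")[-1].strip()
        match = re.search(r"```python\n(.*?)```", reply, re.DOTALL)
        if match:
            output = run_code(match.group(1), namespace)
            # Feed the execution output back so the model can plan the next sub-task.
            messages.append({"role": "user", "content": f"Output:\n{output}"})
    return None  # step budget exhausted
```

Keeping both the prior programs and their outputs in the transcript is what distinguishes this step-wise scheme from single-shot program generation: each sub-task is conditioned on verified intermediate results rather than on the model's unchecked reasoning.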
Cite
Text
Singh et al. "SBSC: Step-by-Step Coding for Improving Mathematical Olympiad Performance." NeurIPS 2024 Workshops: MATH-AI, 2024.
Markdown
[Singh et al. "SBSC: Step-by-Step Coding for Improving Mathematical Olympiad Performance." NeurIPS 2024 Workshops: MATH-AI, 2024.](https://mlanthology.org/neuripsw/2024/singh2024neuripsw-sbsc/)
BibTeX
@inproceedings{singh2024neuripsw-sbsc,
  title     = {{SBSC: Step-by-Step Coding for Improving Mathematical Olympiad Performance}},
  author    = {Singh, Kunal and Biswas, Ankan and Bhowmick, Sayandeep and Moturi, Pradeep},
  booktitle = {NeurIPS 2024 Workshops: MATH-AI},
  year      = {2024},
  url       = {https://mlanthology.org/neuripsw/2024/singh2024neuripsw-sbsc/}
}