OptiBench Meets ReSocratic: Measure and Improve LLMs for Optimization Modeling
Abstract
Large language models (LLMs) have exhibited strong problem-solving abilities in mathematical reasoning. Solving realistic optimization (OPT) problems in application scenarios, however, requires advanced applied-mathematics skills, and current OPT benchmarks, which cover only linear programming, fall far short of complex realistic situations. In this work, we propose **OptiBench**, a benchmark for end-to-end optimization problem solving with human-readable inputs and outputs. **OptiBench** contains rich optimization problems, including linear and nonlinear programming with or without tabular data, and can therefore comprehensively evaluate LLMs' solving ability. In our benchmark, LLMs are required to call a code solver to produce precise numerical answers. Furthermore, to alleviate the data scarcity for optimization problems and to bridge the gap between small-scale open-source LLMs (e.g., Llama-3-8b) and closed-source LLMs (e.g., GPT-4), we further propose a data synthesis method named ***ReSocratic***. Unlike general data synthesis methods that proceed from questions to answers, ***ReSocratic*** first incrementally synthesizes formatted optimization demonstrations with mathematical formulations step by step, and then back-translates the generated demonstrations into questions. Based on this method, we synthesize the ***ReSocratic-29k*** dataset. We further conduct supervised fine-tuning with ***ReSocratic-29k*** on multiple open-source models. Experimental results show that ***ReSocratic-29k*** significantly improves their performance.
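As context for the solver-calling setup described above, the following is a minimal sketch of how an LLM-generated program might invoke an off-the-shelf solver to return a precise numerical answer. The toy production-planning instance and the choice of `scipy.optimize.linprog` are our own illustrative assumptions, not the benchmark's prescribed interface.

```python
# Minimal sketch (an assumption, not OptiBench's prescribed interface):
# an LLM translates a word problem into code that calls a solver and
# prints a precise numerical answer.
from scipy.optimize import linprog

# Toy word problem: maximize profit 3x + 5y subject to machine-hour
# and material constraints (illustrative only).
c = [-3, -5]                     # linprog minimizes, so negate the profit
A_ub = [[1, 2],                  # machine hours: x + 2y <= 14
        [3, 1]]                  # material:      3x +  y <= 18
b_ub = [14, 18]
bounds = [(0, None), (0, None)]  # x, y >= 0

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print(f"optimal x = {res.x[0]:.4f}, y = {res.x[1]:.4f}")
print(f"maximum profit = {-res.fun:.4f}")
```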
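To make the reverse synthesis direction concrete, here is a schematic sketch of a demonstration-first pipeline followed by back-translation. The demonstration format, the prompt wording, and the `generate` stub are hypothetical illustrations of the idea, not the paper's actual templates or prompts.

```python
# Schematic sketch of demonstration-first synthesis followed by
# back-translation into a question. The demonstration format and the
# prompt text are hypothetical; generate() stands in for any LLM call
# and is not a real API.

def generate(prompt: str) -> str:
    # Hypothetical stub: replace with a real LLM client.
    return "<LLM-written word problem goes here>"

# Step 1: incrementally synthesize a formatted demonstration
# (scenario -> variables -> constraints -> objective), step by step.
demonstration = (
    "Scenario: a bakery sells bread (x) and cake (y).\n"
    "Variables: x >= 0, y >= 0\n"
    "Constraints: 2x + 3y <= 60 (oven hours); x + y <= 25 (flour)\n"
    "Objective: maximize 4x + 6y"
)

# Step 2: back-translate the demonstration into a natural-language
# question, reversing the usual question-to-answer direction.
back_translation_prompt = (
    "Write a realistic optimization word problem whose formulation "
    f"matches the following demonstration:\n{demonstration}"
)
question = generate(back_translation_prompt)
print(question)
```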
Cite
Text
Yang et al. "OptiBench Meets ReSocratic: Measure and Improve LLMs for Optimization Modeling." International Conference on Learning Representations, 2025.
Markdown
[Yang et al. "OptiBench Meets ReSocratic: Measure and Improve LLMs for Optimization Modeling." International Conference on Learning Representations, 2025.](https://mlanthology.org/iclr/2025/yang2025iclr-optibench/)
BibTeX
@inproceedings{yang2025iclr-optibench,
  title = {{OptiBench Meets ReSocratic: Measure and Improve LLMs for Optimization Modeling}},
  author = {Yang, Zhicheng and Wang, Yiwei and Huang, Yinya and Guo, Zhijiang and Shi, Wei and Han, Xiongwei and Feng, Liang and Song, Linqi and Liang, Xiaodan and Tang, Jing},
  booktitle = {International Conference on Learning Representations},
  year = {2025},
  url = {https://mlanthology.org/iclr/2025/yang2025iclr-optibench/}
}