ML-Bench: Evaluating Large Language Models and Agents for Machine Learning Tasks on Repository-Level Code

Abstract

Despite the impressive results of Large Language Models (LLMs) in code generation, significant challenges remain in automated ML development, particularly in effectively utilizing existing ML repositories. In addition, recent work has produced LLM agents that interact with repository code (e.g., to resolve issues), prompting the need for end-to-end evaluations that span environment setup through repository deployment, rather than merely generating code in already-configured environments. These two gaps motivated our development of ML-Bench, a benchmark rooted in real-world ML applications that leverage existing code repositories. ML-Bench comprises 9,641 annotated examples across 18 GitHub repositories, challenging LLMs to handle user-specified arguments and the intricacies of repository documentation. To evaluate both LLMs and agents, we employ two setups: ML-Bench-L, which assesses LLMs' text-to-code conversion within a predefined deployment environment, and ML-Bench-A, which tests autonomous agents on end-to-end task execution within a Linux sandbox environment. Our findings indicate that while GPT-4o leads with a Pass@5 rate surpassing 50%, there remains significant scope for improvement, highlighted by issues such as hallucinated outputs and difficulties with bash script generation. Notably, in the more demanding ML-Bench-A setting, GPT-4o achieves a 76.47% success rate, reflecting the efficacy of iterative action and feedback in complex task resolution. Our resources, including code, data, and models, are available at https://anonymous.4open.science/r/ML-Bench.
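The abstract reports Pass@5 for the ML-Bench-L setting. As an illustrative aside, the sketch below computes the standard unbiased Pass@k estimator (Chen et al., 2021) commonly used in code-generation benchmarks; whether ML-Bench uses exactly this estimator is an assumption here, and the sample counts in the example are hypothetical.

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased Pass@k estimator (Chen et al., 2021).

    Given n generations for a task, of which c pass the check,
    returns the probability that at least one of k samples drawn
    without replacement is correct.
    """
    if n - c < k:
        # Fewer than k incorrect samples exist, so any draw of k
        # samples must contain at least one correct generation.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# Hypothetical example: 10 generations per task, 3 pass execution.
print(f"Pass@5 = {pass_at_k(n=10, c=3, k=5):.3f}")
```

Averaging this quantity over all benchmark tasks yields the aggregate Pass@k figure of the kind quoted in the abstract.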

Cite

Text

Tang et al. "ML-Bench: Evaluating Large Language Models and Agents for Machine Learning Tasks on Repository-Level Code." ICLR 2025 Workshops: AgenticAI, 2025.

Markdown

[Tang et al. "ML-Bench: Evaluating Large Language Models and Agents for Machine Learning Tasks on Repository-Level Code." ICLR 2025 Workshops: AgenticAI, 2025.](https://mlanthology.org/iclrw/2025/tang2025iclrw-mlbench/)

BibTeX

@inproceedings{tang2025iclrw-mlbench,
  title     = {{ML-Bench: Evaluating Large Language Models and Agents for Machine Learning Tasks on Repository-Level Code}},
  author    = {Tang, Xiangru and Liu, Yuliang and Cai, Zefan and Shao, Daniel and Lu, Junjie and Zhang, Yichi and Deng, Zexuan and Hu, Helan and An, Kaikai and Huang, Ruijun and Si, Shuzheng and Sheng, Chen and Zhao, Haozhe and Chen, Liang and Liu, Tianyu and Qin, Yujia and Zhou, Wangchunshu and Zhao, Yilun and Jiang, Zhiwei and Chang, Baobao and Cohan, Arman and Gerstein, Mark},
  booktitle = {ICLR 2025 Workshops: AgenticAI},
  year      = {2025},
  url       = {https://mlanthology.org/iclrw/2025/tang2025iclrw-mlbench/}
}