VERSE: Verification-Based Self-Play for Code Instructions
Abstract
Instruction-tuned Code Large Language Models (Code LLMs) excel at diverse code-related tasks, such as program synthesis, automatic program repair, and code explanation. A popular way to collect instruction-tuning data is to have models autonomously generate instructions and corresponding responses. However, directly generated responses are not guaranteed to be functionally correct, a crucial requirement for responses to code instructions. To address this, we present Verification-Based Self-Play (VERSE), which enhances a model's ability to generate correct responses. VERSE establishes a robust verification framework covering various code instructions. With VERSE, Code LLMs engage in self-play to generate instructions and corresponding verifications. They evaluate execution results and self-consistency as verification outcomes, using them as scores to rank the generated data for self-training. Experiments show that VERSE improves multiple base Code LLMs (by 7.6% on average) across various languages and tasks on many benchmarks, confirming its effectiveness.
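The verification-and-ranking loop the abstract describes can be sketched as follows. This is an illustrative approximation, not the authors' implementation: the `solve` entry-point name, the candidate strings, and the test cases are all hypothetical, and the sketch scores each self-generated response by the fraction of generated test cases it passes, then ranks candidates by that score for self-training.

```python
from typing import Callable, List, Tuple

def verification_score(candidate: str, tests: List[Tuple[tuple, object]]) -> float:
    """Execute a candidate solution against generated test cases and
    return the fraction that pass (0.0 if the code fails to load)."""
    namespace: dict = {}
    try:
        exec(candidate, namespace)           # compile and load the candidate
        func: Callable = namespace["solve"]  # assumed entry-point name
    except Exception:
        return 0.0
    passed = 0
    for args, expected in tests:
        try:
            if func(*args) == expected:
                passed += 1
        except Exception:
            pass  # runtime errors count as failures
    return passed / len(tests)

# Hypothetical self-generated candidates for "add two numbers".
candidates = [
    "def solve(a, b):\n    return a + b",  # functionally correct
    "def solve(a, b):\n    return a - b",  # plausible but wrong
]
tests = [((1, 2), 3), ((5, 5), 10)]

# Rank candidates by verification score; top-ranked pairs would
# then be kept as training data for the next self-training round.
ranked = sorted(candidates, key=lambda c: verification_score(c, tests), reverse=True)
```

In the paper's setting the tests themselves are also model-generated, so agreement across candidates (self-consistency) supplements raw execution results as a scoring signal.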
Cite
Text
Jiang et al. "VERSE: Verification-Based Self-Play for Code Instructions." AAAI Conference on Artificial Intelligence, 2025. doi:10.1609/AAAI.V39I23.34604
Markdown
[Jiang et al. "VERSE: Verification-Based Self-Play for Code Instructions." AAAI Conference on Artificial Intelligence, 2025.](https://mlanthology.org/aaai/2025/jiang2025aaai-verse/) doi:10.1609/AAAI.V39I23.34604
BibTeX
@inproceedings{jiang2025aaai-verse,
title = {{VERSE: Verification-Based Self-Play for Code Instructions}},
author = {Jiang, Hao and Liu, Qi and Li, Rui and Zhao, Yuze and Ma, Yixiao and Ye, Shengyu and Lu, Junyu and Su, Yu},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2025},
pages = {24276--24284},
doi = {10.1609/AAAI.V39I23.34604},
url = {https://mlanthology.org/aaai/2025/jiang2025aaai-verse/}
}