QiMeng-CodeV-R1: Reasoning-Enhanced Verilog Generation
Abstract
Large language models (LLMs) trained via reinforcement learning with verifiable reward (RLVR) have achieved breakthroughs on tasks with explicit, automatable verification, such as software programming and mathematical problems. Extending RLVR to electronic design automation (EDA), especially to automatically generating hardware description languages (HDLs) like Verilog from natural-language (NL) specifications, however, poses three key challenges: the lack of automated and accurate verification environments, the scarcity of high-quality NL-code pairs, and the prohibitive computation cost of RLVR. To this end, we introduce CodeV-R1, an RLVR framework for training Verilog generation LLMs. First, we develop a rule-based testbench generator that performs robust equivalence checking against golden references. Second, we propose a round-trip data synthesis method that pairs open-source Verilog snippets with LLM-generated NL descriptions, verifies code–NL–code consistency via the generated testbench, and filters out inequivalent examples to yield a high-quality dataset. Third, we employ a two-stage "distill-then-RL" training pipeline: distillation for the cold start of reasoning abilities, followed by adaptive DAPO, our novel RLVR algorithm that reduces training cost by adaptively adjusting the sampling rate. The resulting model, CodeV-R1-7B, achieves 68.6% and 72.9% pass@1 on VerilogEval v2 and RTLLM v1.1, respectively, surpassing the prior state of the art by 12–20%, and it even exceeds the performance of the 671B DeepSeek-R1 on RTLLM. We have released our model, training code, and dataset to facilitate research in the EDA and LLM communities.
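To make the round-trip data synthesis step concrete, the sketch below outlines the filtering loop described in the abstract. It is a minimal illustration, not the authors' released code: the helpers `describe_verilog`, `generate_verilog`, and `testbench_equivalent` are hypothetical stand-ins for the description LLM, the code-generation LLM, and the rule-based testbench equivalence check, respectively.

```python
from typing import Callable, List, Tuple


def round_trip_filter(
    snippets: List[str],
    describe_verilog: Callable[[str], str],            # hypothetical: Verilog -> NL spec (LLM)
    generate_verilog: Callable[[str], str],            # hypothetical: NL spec -> Verilog (LLM)
    testbench_equivalent: Callable[[str, str], bool],  # hypothetical: rule-based testbench check
) -> List[Tuple[str, str]]:
    """Keep only (NL spec, golden code) pairs whose round trip passes equivalence checking."""
    dataset: List[Tuple[str, str]] = []
    for golden in snippets:
        spec = describe_verilog(golden)        # code -> NL description
        regenerated = generate_verilog(spec)   # NL description -> code
        # Retain the pair only if the regenerated module behaves the same as the
        # golden reference under the auto-generated testbench; otherwise discard it.
        if testbench_equivalent(golden, regenerated):
            dataset.append((spec, golden))
    return dataset
```

The filter keeps a pair only when the specification is faithful enough that a model can regenerate functionally equivalent code from it, which is how inequivalent examples are screened out of the training set.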
Cite
Text
Zhu et al. "QiMeng-CodeV-R1: Reasoning-Enhanced Verilog Generation." Advances in Neural Information Processing Systems, 2025.Markdown
[Zhu et al. "QiMeng-CodeV-R1: Reasoning-Enhanced Verilog Generation." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/zhu2025neurips-qimengcodevr1/)BibTeX
@inproceedings{zhu2025neurips-qimengcodevr1,
title = {{QiMeng-CodeV-R1: Reasoning-Enhanced Verilog Generation}},
author = {Zhu, Yaoyu and Huang, Di and Lyu, Hanqi and Zhang, Xiaoyun and Li, Chongxiao and Shi, Wenxuan and Wu, Yutong and Mu, Jianan and Wang, Jinghua and Zhao, Yang and Jin, Pengwei and Cheng, Shuyao and Liang, Shengwen and Zhang, Xishan and Zhang, Rui and Du, Zidong and Guo, Qi and Hu, Xing and Chen, Yunji},
booktitle = {Advances in Neural Information Processing Systems},
year = {2025},
url = {https://mlanthology.org/neurips/2025/zhu2025neurips-qimengcodevr1/}
}