RepoBench: Benchmarking Repository-Level Code Auto-Completion Systems

Abstract

Large Language Models (LLMs) have greatly advanced code auto-completion systems, with the potential for substantial productivity enhancements for developers. However, current benchmarks mainly focus on single-file tasks, leaving an assessment gap for more complex, real-world, multi-file programming scenarios. To fill this gap, we introduce RepoBench, a new benchmark specifically designed for evaluating repository-level code auto-completion systems. RepoBench consists of three interconnected evaluation tasks: RepoBench-R (Retrieval), RepoBench-C (Code Completion), and RepoBench-P (Pipeline). These tasks respectively measure a system's ability to retrieve the most relevant code snippets from other files as cross-file context, to predict the next line of code given cross-file and in-file context, and to handle complex tasks that require a combination of both retrieval and next-line prediction. RepoBench aims to facilitate a more complete comparison of system performance and to encourage continuous improvement in auto-completion systems. RepoBench is actively maintained and serves as a live benchmark, publicly available at https://github.com/Leolty/repobench.
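To make the three tasks concrete, the following is a minimal, illustrative sketch of how they fit together. It is not RepoBench's actual API; the retriever, model, and similarity functions are hypothetical placeholders standing in for whatever components a real system would use.

```python
# A minimal, illustrative sketch of how the three RepoBench tasks relate.
# This is NOT RepoBench's actual API; the retriever, model, and similarity
# functions below are hypothetical placeholders.

from typing import Callable, List


def retrieve(cross_file_snippets: List[str],
             in_file_context: str,
             similarity: Callable[[str, str], float],
             top_k: int = 3) -> List[str]:
    """RepoBench-R style step: rank candidate snippets from other files by
    similarity to the in-file context and keep the top-k as cross-file context."""
    ranked = sorted(cross_file_snippets,
                    key=lambda snippet: similarity(snippet, in_file_context),
                    reverse=True)
    return ranked[:top_k]


def complete_next_line(model: Callable[[str], str],
                       cross_file_context: List[str],
                       in_file_context: str) -> str:
    """RepoBench-C style step: predict the next line of code from the
    combined cross-file and in-file context."""
    prompt = "\n".join(cross_file_context) + "\n" + in_file_context
    output = model(prompt)
    return output.splitlines()[0] if output else ""


def run_pipeline(model: Callable[[str], str],
                 similarity: Callable[[str, str], float],
                 cross_file_snippets: List[str],
                 in_file_context: str,
                 gold_next_line: str) -> bool:
    """RepoBench-P style step: chain retrieval and next-line prediction,
    scored here with a simple exact match on the stripped line."""
    context = retrieve(cross_file_snippets, in_file_context, similarity)
    prediction = complete_next_line(model, context, in_file_context)
    return prediction.strip() == gold_next_line.strip()
```

The actual benchmark reports richer metrics (e.g., exact match and edit similarity for completion), but the structure above mirrors the retrieval, completion, and pipeline flow described in the abstract.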

Cite

Text

Liu et al. "RepoBench: Benchmarking Repository-Level Code Auto-Completion Systems." International Conference on Learning Representations, 2024.

Markdown

[Liu et al. "RepoBench: Benchmarking Repository-Level Code Auto-Completion Systems." International Conference on Learning Representations, 2024.](https://mlanthology.org/iclr/2024/liu2024iclr-repobench/)

BibTeX

@inproceedings{liu2024iclr-repobench,
  title     = {{RepoBench: Benchmarking Repository-Level Code Auto-Completion Systems}},
  author    = {Liu, Tianyang and Xu, Canwen and McAuley, Julian},
  booktitle = {International Conference on Learning Representations},
  year      = {2024},
  url       = {https://mlanthology.org/iclr/2024/liu2024iclr-repobench/}
}