On Pretraining for Project-Level Code Completion

Abstract

Repository-level pretraining is commonly used to enable large language models for code to leverage codebase-wide context. This enhances their ability to generate accurate and context-aware code completions. In this work, we investigate how different repository-processing strategies affect in-context learning in OpenCoder, a 1.5B-parameter model. We extend its context window from 4,096 to 16,384 tokens by training on an additional 1B tokens of curated repository-level data. Despite relying on a smaller dataset than competing models (which often use hundreds of billions of tokens), our model achieves comparable performance on the Long Code Arena benchmark. We find that various repository-processing techniques yield similarly strong results, with the primary gain coming from adapting to a new rotary positional embedding (RoPE) scaling parameter. Finally, we show that a simpler file-level training approach at the original sequence length remains highly effective, opening up repository-level code completion research to settings with more constrained data and compute resources.
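
The abstract attributes the main gain to adapting the model to a new RoPE scaling parameter when extending the context window from 4,096 to 16,384 tokens. The exact configuration is not given here; the sketch below only illustrates, under assumed values for the RoPE base ("theta"), how a larger base lowers the rotation frequencies so that positions up to the extended length stay within an angular range comparable to what the model saw at shorter contexts.

# Minimal sketch of RoPE base scaling for context extension.
# The base values below are illustrative assumptions, not the paper's configuration.
def rope_inverse_frequencies(head_dim: int, base: float) -> list[float]:
    # Standard RoPE inverse frequencies: 1 / base^(2i / head_dim) for i = 0 .. head_dim/2 - 1
    return [base ** (-2.0 * i / head_dim) for i in range(head_dim // 2)]

head_dim = 64
freqs_short = rope_inverse_frequencies(head_dim, 10_000.0)    # assumed original base
freqs_long = rope_inverse_frequencies(head_dim, 500_000.0)    # assumed base after 4x context extension

# With the larger base, the rotation angle accumulated at position 16,384
# is much smaller, keeping long positions closer to the regime seen in pretraining.
angle_short = 16_384 * freqs_short[-1]
angle_long = 16_384 * freqs_long[-1]
print(f"lowest-frequency angle at position 16384: {angle_short:.3f} (old base) vs {angle_long:.3f} (new base)")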

Cite

Text

Sapronov and Glukhov. "On Pretraining for Project-Level Code Completion." ICLR 2025 Workshops: DL4C, 2025.

Markdown

[Sapronov and Glukhov. "On Pretraining for Project-Level Code Completion." ICLR 2025 Workshops: DL4C, 2025.](https://mlanthology.org/iclrw/2025/sapronov2025iclrw-pretraining/)

BibTeX

@inproceedings{sapronov2025iclrw-pretraining,
  title     = {{On Pretraining for Project-Level Code Completion}},
  author    = {Sapronov, Maksim and Glukhov, Evgeniy},
  booktitle = {ICLR 2025 Workshops: DL4C},
  year      = {2025},
  url       = {https://mlanthology.org/iclrw/2025/sapronov2025iclrw-pretraining/}
}