Contextual Augmented Multi-Model Programming (CAMP): A Local-Cloud Copilot Solution

Abstract

To combine the code-generation strengths of cloud-based Large Language Models (LLMs) with the adaptability of locally integrated tools, we introduce CAMP, a collaborative multi-model copilot framework for AI-assisted programming. CAMP employs context-aware Retrieval-Augmented Generation (RAG), dynamically retrieving relevant information from local codebases to construct optimized prompts tailored for code generation tasks. This hybrid strategy enhances LLM effectiveness in local coding environments, yielding a 12.5% performance boost over non-contextual generation and a 6.3% gain compared to a baseline RAG implementation. We demonstrate the practical application of CAMP through "Copilot for Xcode," supporting tasks such as code completion, bug detection, and documentation generation. Its success led to integration with GitHub Copilot, underscoring the real-world impact and scalability of our approach in evolving AI-driven software development practices. This work was originally published as a full paper in IEEE CAI 2025. The current version is a concise presentation for this workshop, highlighting the key contributions and encouraging further discussion within the community.
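The core idea described above, retrieving relevant snippets from a local codebase and prepending them to a prompt before calling a cloud LLM, can be illustrated with a minimal sketch. The abstract does not specify CAMP's retriever, so this example assumes a toy bag-of-words cosine similarity; the function names (`tokenize`, `build_prompt`) and the prompt layout are illustrative only, not CAMP's actual implementation.

```python
import math
import re
from collections import Counter


def tokenize(text):
    # Toy tokenizer: lowercase alphanumeric runs (splits "parse_config" into
    # "parse" and "config"). A real retriever would use code-aware embeddings.
    return re.findall(r"[a-z0-9]+", text.lower())


def cosine(a, b):
    # Cosine similarity between two bag-of-words Counters.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


def build_prompt(task, snippets, k=2):
    # Rank local snippets by similarity to the task and prepend the top-k
    # as context, approximating the context-aware RAG step.
    query = Counter(tokenize(task))
    ranked = sorted(snippets, key=lambda s: cosine(query, Counter(tokenize(s))),
                    reverse=True)
    context = "\n\n".join(ranked[:k])
    return f"### Relevant local context\n{context}\n\n### Task\n{task}"


snippets = [
    "def parse_config(path): ...",
    "def render_view(model): ...",
    "def load_config_defaults(): ...",
]
prompt = build_prompt("Complete the config parsing helper", snippets)
```

The resulting `prompt` would then be sent to the cloud LLM; only the retrieval and prompt assembly happen locally, which is what lets the hybrid setup keep the codebase on the developer's machine.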

Cite

Text

Wang et al. "Contextual Augmented Multi-Model Programming (CAMP): A Local-Cloud Copilot Solution." ICLR 2025 Workshops: DL4C, 2025.

Markdown

[Wang et al. "Contextual Augmented Multi-Model Programming (CAMP): A Local-Cloud Copilot Solution." ICLR 2025 Workshops: DL4C, 2025.](https://mlanthology.org/iclrw/2025/wang2025iclrw-contextual/)

BibTeX

@inproceedings{wang2025iclrw-contextual,
  title     = {{Contextual Augmented Multi-Model Programming (CAMP): A Local-Cloud Copilot Solution}},
  author    = {Wang, Yuchen and Guo, Shangxin and Tan, Chee Wei},
  booktitle = {ICLR 2025 Workshops: DL4C},
  year      = {2025},
  url       = {https://mlanthology.org/iclrw/2025/wang2025iclrw-contextual/}
}