A Case-Based Reasoning Approach to Dynamic Few-Shot Prompting for Code Generation
Abstract
Large language models have recently succeeded in various code generation tasks but still struggle with generating task plans for complex, real-world problems that require detailed, context-aware planning and execution. This work aims to enhance these models' accuracy in generating task plans from natural language instructions. These task plans, represented as Python code, use custom functions to accomplish the user's request as specified in natural language. The task plans are multi-step, often include loops, and are executed in a Python runtime environment. Our approach uses case-based reasoning to perform dynamic few-shot prompting, improving the large language model's ability to accurately follow planning prompts. We compare the effectiveness of dynamic prompting with static three-shot and zero-shot prompting approaches, finding that dynamic prompting improves the accuracy of the generated code. Additionally, we identify and discuss seven types of failures in code generation.
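The core idea described in the abstract, retrieving the most similar stored cases to build a prompt at query time, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the bag-of-words cosine similarity, the `instruction`/`plan` case fields, and the prompt layout are all assumptions made for the example (a real system would likely use learned embeddings for retrieval).

```python
from collections import Counter
from math import sqrt


def _vectorize(text):
    # Bag-of-words term counts; stands in for a learned embedding model.
    return Counter(text.lower().split())


def _cosine(a, b):
    # Cosine similarity between two sparse term-count vectors.
    num = sum(a[t] * b[t] for t in set(a) & set(b))
    den = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0


def retrieve_cases(query, case_base, k=3):
    """Case-based retrieval: return the k cases whose instructions
    are most similar to the new query instruction."""
    qv = _vectorize(query)
    ranked = sorted(
        case_base,
        key=lambda c: _cosine(qv, _vectorize(c["instruction"])),
        reverse=True,
    )
    return ranked[:k]


def build_prompt(query, case_base, k=3):
    """Assemble a dynamic few-shot prompt: the k retrieved
    instruction/plan pairs followed by the new instruction."""
    parts = [
        f"# Instruction: {c['instruction']}\n{c['plan']}"
        for c in retrieve_cases(query, case_base, k)
    ]
    parts.append(f"# Instruction: {query}\n# Plan:")
    return "\n\n".join(parts)
```

In contrast to static three-shot prompting, the examples placed in the prompt change with each query, so the model conditions on plans for the most similar previously solved tasks.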
Cite
Text
Dannenhauer et al. "A Case-Based Reasoning Approach to Dynamic Few-Shot Prompting for Code Generation." ICML 2024 Workshops: LLMs_and_Cognition, 2024.
Markdown
[Dannenhauer et al. "A Case-Based Reasoning Approach to Dynamic Few-Shot Prompting for Code Generation." ICML 2024 Workshops: LLMs_and_Cognition, 2024.](https://mlanthology.org/icmlw/2024/dannenhauer2024icmlw-casebased/)
BibTeX
@inproceedings{dannenhauer2024icmlw-casebased,
title = {{A Case-Based Reasoning Approach to Dynamic Few-Shot Prompting for Code Generation}},
author = {Dannenhauer, Dustin and Dannenhauer, Zohreh and Christou, Despina and Hatalis, Kostas},
booktitle = {ICML 2024 Workshops: LLMs_and_Cognition},
year = {2024},
url = {https://mlanthology.org/icmlw/2024/dannenhauer2024icmlw-casebased/}
}