Cracking the Code: Evaluating Zero-Shot Prompting Methods for Providing Programming Feedback
Abstract
We introduce an evaluation framework to assess the feedback given by large language models (LLMs) under different prompt engineering techniques, and we conduct a case study that systematically varies prompts to examine their influence on feedback quality for common programming errors in R. Our findings suggest that prompts recommending a stepwise approach improve precision, whereas omitting explicit details about which data to analyze can bolster error identification.
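To make the setup concrete, the sketch below shows one way a zero-shot, stepwise feedback prompt could be issued for a common R error. This is a minimal illustration only: the prompt wording, the buggy R snippet, and the model name are assumptions for demonstration and are not taken from the paper or its evaluation framework.

# Illustrative sketch; prompt text, R example, and model are placeholders, not the study's materials.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# A common R error: averaging a vector containing NA without na.rm = TRUE,
# so mean() returns NA instead of the intended value.
buggy_r_code = """
scores <- c(4, 8, NA, 6)
mean(scores)   # returns NA rather than the average
"""

# Zero-shot prompt variant recommending a stepwise approach, the kind of
# instruction the abstract associates with higher precision.
stepwise_prompt = (
    "You are giving feedback on a student's R code.\n"
    "Proceed step by step: first restate what the code is supposed to do, "
    "then identify the error, then explain how to fix it.\n\n"
    f"Student code:\n{buggy_r_code}"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model, not necessarily the one used in the study
    messages=[{"role": "user", "content": stepwise_prompt}],
)
print(response.choices[0].message.content)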
Cite
Text
Ippisch et al. "Cracking the Code: Evaluating Zero-Shot Prompting Methods for Providing Programming Feedback." ICLR 2025 Workshops: HAIC, 2025.
Markdown
[Ippisch et al. "Cracking the Code: Evaluating Zero-Shot Prompting Methods for Providing Programming Feedback." ICLR 2025 Workshops: HAIC, 2025.](https://mlanthology.org/iclrw/2025/ippisch2025iclrw-cracking/)
BibTeX
@inproceedings{ippisch2025iclrw-cracking,
title = {{Cracking the Code: Evaluating Zero-Shot Prompting Methods for Providing Programming Feedback}},
author = {Ippisch, Niklas and Haensch, Anna-Carolina and Herklotz, Markus and Simson, Jan and Beck, Jacob and Schierholz, Malte},
booktitle = {ICLR 2025 Workshops: HAIC},
year = {2025},
url = {https://mlanthology.org/iclrw/2025/ippisch2025iclrw-cracking/}
}