Do Users Write More Insecure Code with AI Assistants?
Abstract
We conduct the first large-scale user study examining how users interact with an AI code assistant to solve a variety of security-related tasks across different programming languages. Overall, we find that participants who had access to an AI assistant based on OpenAI’s codex-davinci-002 model wrote less secure code than those without access. Additionally, participants with access to an AI assistant were more likely to believe they wrote secure code than those without access. Furthermore, we find that participants who trusted the AI less and engaged more with the language and format of their prompts (e.g., re-phrasing, adjusting temperature) provided code with fewer security vulnerabilities. Finally, in order to better inform the design of future AI assistants, we provide an in-depth analysis of participants’ language and interaction behavior, and release our user interface as an instrument for conducting similar studies in the future.
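As context for the "adjusting temperature" behavior mentioned in the abstract, the following minimal sketch (not from the paper) shows how a completion request to a Codex-family model could be issued with the legacy (pre-1.0) OpenAI Python SDK. The model identifier, prompt, and parameter values are illustrative assumptions, not the study's actual setup.

import openai  # legacy (pre-1.0) OpenAI Python SDK

openai.api_key = "sk-..."  # placeholder; a real API key is required

def complete_code(prompt: str, temperature: float = 0.2) -> str:
    # Lower temperature makes sampling more deterministic; study participants
    # could adjust this value when querying the assistant.
    response = openai.Completion.create(
        model="code-davinci-002",  # assumed Codex model id; the paper refers to codex-davinci-002
        prompt=prompt,
        max_tokens=256,
        temperature=temperature,
    )
    return response["choices"][0]["text"]

# Example: a re-phrased, more specific prompt at low temperature, two of the
# prompt-engagement behaviors the study associates with fewer vulnerabilities.
print(complete_code(
    "# Python 3\n# Hash and salt a password using a standard library\n"
    "def hash_password(password: str) -> str:",
    temperature=0.1,
))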
Cite
Text
Perry et al. "Do Users Write More Insecure Code with AI Assistants?" ICML 2023 Workshops: DeployableGenerativeAI, 2023.
Markdown
[Perry et al. "Do Users Write More Insecure Code with AI Assistants?" ICML 2023 Workshops: DeployableGenerativeAI, 2023.](https://mlanthology.org/icmlw/2023/perry2023icmlw-users/)
BibTeX
@inproceedings{perry2023icmlw-users,
  title = {{Do Users Write More Insecure Code with AI Assistants?}},
  author = {Perry, Neil and Srivastava, Megha and Kumar, Deepak and Boneh, Dan},
  booktitle = {ICML 2023 Workshops: DeployableGenerativeAI},
  year = {2023},
  url = {https://mlanthology.org/icmlw/2023/perry2023icmlw-users/}
}