Prompting as Scientific Inquiry
Abstract
Prompting is the primary method by which we study and control large language models. It is also one of the most powerful: nearly every major capability attributed to LLMs—few-shot learning, chain-of-thought, constitutional AI—was first unlocked through prompting. Yet prompting is rarely treated as science and is frequently dismissed as alchemy. We argue that this is a category error. If we treat LLMs as a new kind of organism—complex, opaque, and trained rather than programmed—then prompting is not a workaround. It is behavioral science. Where mechanistic interpretability peers into the neural substrate, prompting probes the model through its native interface: language. We argue that prompting is not inferior, but rather a key component in the science of LLMs.
Cite
Text
Holtzman and Tan. "Prompting as Scientific Inquiry." Advances in Neural Information Processing Systems, 2025.
Markdown
[Holtzman and Tan. "Prompting as Scientific Inquiry." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/holtzman2025neurips-prompting/)
BibTeX
@inproceedings{holtzman2025neurips-prompting,
title = {{Prompting as Scientific Inquiry}},
author = {Holtzman, Ari and Tan, Chenhao},
booktitle = {Advances in Neural Information Processing Systems},
year = {2025},
url = {https://mlanthology.org/neurips/2025/holtzman2025neurips-prompting/}
}