How to Prompt LLMs for Text-to-SQL: A Study in Zero-Shot, Single-Domain, and Cross-Domain Settings

Abstract

Large language models (LLMs) with in-context learning have demonstrated remarkable capability in the text-to-SQL task. Previous research has prompted LLMs with various demonstration-retrieval strategies and intermediate reasoning steps to enhance their performance. However, those works often employ varied strategies when constructing the prompt text for text-to-SQL inputs, such as databases and demonstration examples. This leads to a lack of comparability both in the prompt constructions and in their primary contributions. Furthermore, selecting an effective prompt construction has emerged as a persistent problem for future research. To address this limitation, we comprehensively investigate the impact of prompt constructions across various settings and provide insights into prompt constructions for future text-to-SQL studies.
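As context for the abstract, the sketch below illustrates one zero-shot prompt construction of the kind compared in this line of work: serializing the database as its CREATE TABLE statements followed by a few example rows, then appending the natural-language question. This is a minimal sketch, not the paper's prescribed format; the SQLite-specific serialization, the instruction wording, and the function names are illustrative assumptions.

import sqlite3

def serialize_database(db_path: str, num_rows: int = 3) -> str:
    """Render a SQLite database as CREATE TABLE statements plus example rows."""
    conn = sqlite3.connect(db_path)
    cur = conn.cursor()
    parts = []
    # sqlite_master stores the original CREATE TABLE text for each user table.
    cur.execute(
        "SELECT name, sql FROM sqlite_master "
        "WHERE type = 'table' AND name NOT LIKE 'sqlite_%'"
    )
    for name, create_sql in cur.fetchall():
        parts.append(create_sql.strip() + ";")
        # Show a handful of rows as SQL comments so the model sees cell values.
        parts.append(f"/* {num_rows} example rows from {name}: */")
        for row in cur.execute(f"SELECT * FROM {name} LIMIT {num_rows}"):
            parts.append("/* " + " | ".join(map(str, row)) + " */")
    conn.close()
    return "\n".join(parts)

def build_zero_shot_prompt(db_path: str, question: str) -> str:
    """Compose a zero-shot text-to-SQL prompt (instruction wording is an assumption)."""
    return (
        serialize_database(db_path)
        + "\n\n-- Answer the following question with a single SQLite query.\n"
        + f"-- Question: {question}\n"
        + "SELECT"
    )

Ending the prompt with a dangling SELECT nudges the model to complete a query rather than explain one; in single-domain or cross-domain few-shot settings, retrieved question-SQL demonstrations would be prepended in the same serialized format.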

Cite

Text

Chang and Fosler-Lussier. "How to Prompt LLMs for Text-to-SQL: A Study in Zero-Shot, Single-Domain, and Cross-Domain Settings." NeurIPS 2023 Workshops: TRL, 2023.

Markdown

[Chang and Fosler-Lussier. "How to Prompt LLMs for Text-to-SQL: A Study in Zero-Shot, Single-Domain, and Cross-Domain Settings." NeurIPS 2023 Workshops: TRL, 2023.](https://mlanthology.org/neuripsw/2023/chang2023neuripsw-prompt/)

BibTeX

@inproceedings{chang2023neuripsw-prompt,
  title     = {{How to Prompt LLMs for Text-to-SQL: A Study in Zero-Shot, Single-Domain, and Cross-Domain Settings}},
  author    = {Chang, Shuaichen and Fosler-Lussier, Eric},
  booktitle = {NeurIPS 2023 Workshops: TRL},
  year      = {2023},
  url       = {https://mlanthology.org/neuripsw/2023/chang2023neuripsw-prompt/}
}