Shh, Don't Say That! Domain Certification in LLMs

Abstract

Large language models (LLMs) are often deployed to perform constrained tasks within narrow domains. For example, customer support bots can be built on top of LLMs, relying on their broad language understanding and capabilities to enhance performance. However, these LLMs are adversarially susceptible and may generate outputs outside the intended domain. To formalize, assess, and mitigate this risk, we introduce domain certification: a guarantee that accurately characterizes the out-of-domain behavior of language models. We then propose a simple yet effective approach, which we call VALID, that provides adversarial bounds as a certificate. Finally, we evaluate our method across a diverse set of datasets, demonstrating that it yields meaningful certificates that tightly bound the probability of out-of-domain samples with minimal penalty to refusal behavior.
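To make the notion of a certificate concrete, below is a minimal, illustrative sketch of how such a bound might be checked, assuming the certificate is an upper bound on the probability the model assigns to any out-of-domain sequence. The function and parameter names (`certificate_holds`, `sequence_log_prob`, `certified_upper_bound`) are hypothetical placeholders, not the paper's API or the actual VALID procedure.

```python
import math

def certificate_holds(sequence_log_prob: float, certified_upper_bound: float) -> bool:
    """Check that the probability assigned to an out-of-domain sequence
    respects a certified upper bound (illustrative sketch only)."""
    return math.exp(sequence_log_prob) <= certified_upper_bound

# Example: a certificate claiming out-of-domain sequences receive probability at most 1e-6.
# A sequence with log-probability -20 (probability ~2e-9) satisfies the bound.
print(certificate_holds(sequence_log_prob=-20.0, certified_upper_bound=1e-6))  # True
```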

Cite

Text

Emde et al. "Shh, Don't Say That! Domain Certification in LLMs." ICLR 2025 Workshops: FM-Wild, 2025.

Markdown

[Emde et al. "Shh, Don't Say That! Domain Certification in LLMs." ICLR 2025 Workshops: FM-Wild, 2025.](https://mlanthology.org/iclrw/2025/emde2025iclrw-shh/)

BibTeX

@inproceedings{emde2025iclrw-shh,
  title     = {{Shh, Don't Say That! Domain Certification in LLMs}},
  author    = {Emde, Cornelius and Paren, Alasdair and Arvind, Preetham and Kayser, Maxime and Rainforth, Tom and Lukasiewicz, Thomas and Ghanem, Bernard and Torr, Philip and Bibi, Adel},
  booktitle = {ICLR 2025 Workshops: FM-Wild},
  year      = {2025},
  url       = {https://mlanthology.org/iclrw/2025/emde2025iclrw-shh/}
}