LLMGuard: Guarding Against Unsafe LLM Behavior

Abstract

Although the rise of Large Language Models (LLMs) in enterprise settings brings new opportunities and capabilities, it also brings challenges, such as the risk of generating inappropriate, biased, or misleading content that violates regulations and raises legal concerns. To alleviate this, we present "LLMGuard", a tool that monitors user interactions with an LLM application and flags content against specific behaviours or conversation topics. To do this robustly, LLMGuard employs an ensemble of detectors.
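
The ensemble-of-detectors idea can be illustrated with a minimal sketch. The detector names, patterns, and interfaces below are assumptions for illustration only, not the paper's actual detectors or flagging logic:

```python
# Illustrative sketch of an ensemble of detectors screening LLM interactions.
# Detector names and rules are hypothetical stand-ins for learned classifiers.
import re
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Flag:
    detector: str  # which detector raised the flag
    reason: str    # short human-readable explanation


def toxicity_detector(text: str) -> List[Flag]:
    """Hypothetical keyword-based stand-in for a toxicity classifier."""
    banned = {"hate", "kill"}
    hits = [w for w in banned if w in text.lower()]
    return [Flag("toxicity", f"contains '{w}'") for w in hits]


def pii_detector(text: str) -> List[Flag]:
    """Hypothetical regex-based stand-in for a PII detector."""
    if re.search(r"\b\d{3}-\d{2}-\d{4}\b", text):  # SSN-like pattern
        return [Flag("pii", "possible SSN detected")]
    return []


def run_ensemble(text: str,
                 detectors: List[Callable[[str], List[Flag]]]) -> List[Flag]:
    """Run every detector on the text and collect all raised flags."""
    flags: List[Flag] = []
    for detector in detectors:
        flags.extend(detector(text))
    return flags


if __name__ == "__main__":
    detectors = [toxicity_detector, pii_detector]
    for flag in run_ensemble("My SSN is 123-45-6789", detectors):
        print(f"[{flag.detector}] {flag.reason}")
```

Each detector is independent, so new ones can be added to the ensemble without changing the flagging loop; a flagged interaction can then be blocked, logged, or surfaced to a reviewer depending on the deployment.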

Cite

Text

Goyal et al. "LLMGuard: Guarding Against Unsafe LLM Behavior." AAAI Conference on Artificial Intelligence, 2024. doi:10.1609/AAAI.V38I21.30566

Markdown

[Goyal et al. "LLMGuard: Guarding Against Unsafe LLM Behavior." AAAI Conference on Artificial Intelligence, 2024.](https://mlanthology.org/aaai/2024/goyal2024aaai-llmguard/) doi:10.1609/AAAI.V38I21.30566

BibTeX

@inproceedings{goyal2024aaai-llmguard,
  title     = {{LLMGuard: Guarding Against Unsafe LLM Behavior}},
  author    = {Goyal, Shubh and Hira, Medha and Mishra, Shubham and Goyal, Sukriti and Goel, Arnav and Dadu, Niharika and B., Kirushikesh D. and Mehta, Sameep and Madaan, Nishtha},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2024},
  pages     = {23790--23792},
  doi       = {10.1609/AAAI.V38I21.30566},
  url       = {https://mlanthology.org/aaai/2024/goyal2024aaai-llmguard/}
}