Low-Resource Languages Jailbreak GPT-4
Abstract
AI safety training and red-teaming of large language models (LLMs) are measures to mitigate the generation of unsafe content. Our work exposes the inherent cross-lingual vulnerability of these safety mechanisms, resulting from the linguistic inequality of safety training data, by successfully circumventing GPT-4's safeguard through translating unsafe English inputs into low-resource languages. On the AdvBench benchmark, GPT-4 engages with the unsafe translated inputs and provides actionable items that move users toward their harmful goals 79% of the time, which is on par with, or even surpasses, state-of-the-art jailbreaking attacks. Other high-/mid-resource languages have significantly lower attack success rates, which suggests that the cross-lingual vulnerability mainly applies to low-resource languages. Previously, limited training on low-resource languages primarily affected speakers of those languages, causing technological disparities. However, our work highlights a crucial shift: this deficiency now poses a risk to all LLM users. Publicly available translation APIs enable anyone to exploit LLMs' safety vulnerabilities. Therefore, our work calls for more holistic red-teaming efforts to develop robust multilingual safeguards with wide language coverage.
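The attack pipeline summarized in the abstract can be illustrated with a minimal sketch: translate an unsafe English prompt into a low-resource language, query GPT-4 with the translated prompt, and translate the reply back for inspection. In the sketch below, `translate` is a hypothetical placeholder for a publicly available translation API (the abstract only states that such APIs exist; no specific service or signature is taken from the paper), while the GPT-4 call uses the standard OpenAI chat completions interface.

```python
# Minimal sketch of a translation-based jailbreak pipeline, under the
# assumptions stated above. `translate` is a hypothetical stand-in for any
# public translation API; it is not part of the paper's released code.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def translate(text: str, source: str, target: str) -> str:
    """Hypothetical placeholder for a public translation API call."""
    raise NotImplementedError("plug in a translation service here")


def translation_attack(unsafe_prompt_en: str, low_resource_lang: str) -> str:
    # Step 1: translate the unsafe English prompt into a low-resource language.
    translated_prompt = translate(unsafe_prompt_en, source="en",
                                  target=low_resource_lang)

    # Step 2: query GPT-4 with the translated prompt.
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": translated_prompt}],
    )
    reply = response.choices[0].message.content

    # Step 3: translate the reply back to English to inspect whether the
    # model engaged with the unsafe request.
    return translate(reply, source=low_resource_lang, target="en")
```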
Cite
Text
Yong et al. "Low-Resource Languages Jailbreak GPT-4." NeurIPS 2023 Workshops: SoLaR, 2023.
Markdown
[Yong et al. "Low-Resource Languages Jailbreak GPT-4." NeurIPS 2023 Workshops: SoLaR, 2023.](https://mlanthology.org/neuripsw/2023/yong2023neuripsw-lowresource/)
BibTeX
@inproceedings{yong2023neuripsw-lowresource,
title = {{Low-Resource Languages Jailbreak GPT-4}},
author = {Yong, Zheng Xin and Menghini, Cristina and Bach, Stephen},
booktitle = {NeurIPS 2023 Workshops: SoLaR},
year = {2023},
url = {https://mlanthology.org/neuripsw/2023/yong2023neuripsw-lowresource/}
}