Attacks on Third-Party APIs of Large Language Models
Abstract
Large language model (LLM) services have recently begun offering a plugin ecosystem to interact with third-party API services. This innovation enhances the capabilities of LLMs but introduces risks since these plugins, developed by various third parties, cannot be easily trusted. This paper proposes a new attacking framework to examine security and safety vulnerabilities within LLM platforms that incorporate third-party services. Applying our framework specifically to widely used LLMs, we identify real-world malicious attacks across various domains on third-party APIs that can imperceptibly modify LLM outputs. The paper discusses the unique challenges posed by third-party API integration and offers strategic possibilities to improve the security and safety of LLM ecosystems moving forward.
Cite
Text
Zhao et al. "Attacks on Third-Party APIs of Large Language Models." ICLR 2024 Workshops: SeT_LLM, 2024.Markdown
[Zhao et al. "Attacks on Third-Party APIs of Large Language Models." ICLR 2024 Workshops: SeT_LLM, 2024.](https://mlanthology.org/iclrw/2024/zhao2024iclrw-attacks/)BibTeX
@inproceedings{zhao2024iclrw-attacks,
title = {{Attacks on Third-Party APIs of Large Language Models}},
author = {Zhao, Wanru and Khazanchi, Vidit and Xing, Haodi and He, Xuanli and Xu, Qiongkai and Lane, Nicholas Donald},
booktitle = {ICLR 2024 Workshops: SeT_LLM},
year = {2024},
url = {https://mlanthology.org/iclrw/2024/zhao2024iclrw-attacks/}
}