When Your AI Becomes a Target: AI Security Incidents and Best Practices

Abstract

In contrast to vast academic efforts to study AI security, few real-world reports of AI security incidents exist. The incident reports released so far prevent a thorough investigation of the attackers' motives, as crucial information about the affected company and AI application is missing. As a consequence, it often remains unknown how to avoid incidents. We tackle this gap and combine previous reports with freshly collected incidents into a small database of 32 AI security incidents. We analyze the attackers' targets and goals, influencing factors, causes, and mitigations. Many incidents stem from non-compliance with best practices in security and privacy-enhancing technologies. In the case of direct AI attacks, access control may provide some mitigation, but there is little scientific work on best practices. Our paper is thus a call for action to address these gaps.
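
The abstract describes coding each incident along the dimensions of target, goal, influencing factors, causes, and mitigations. As a minimal sketch of how such an incident record could be represented, not the authors' actual schema, with every class, field, and value name below an assumption for illustration:

from dataclasses import dataclass, field
from enum import Enum


class Target(Enum):
    """Broad attack targets; the two labels here are assumptions."""
    AI_MODEL = "ai_model"          # direct attack on the model itself
    SURROUNDING_SYSTEM = "system"  # conventional security failure around the model


@dataclass
class Incident:
    """One AI security incident, mirroring the analysis dimensions
    named in the abstract."""
    title: str
    target: Target
    attacker_goal: str
    influencing_factors: list[str] = field(default_factory=list)
    causes: list[str] = field(default_factory=list)
    mitigations: list[str] = field(default_factory=list)


# Example record, invented purely for illustration.
example = Incident(
    title="Model extraction via public prediction API",
    target=Target.AI_MODEL,
    attacker_goal="steal model functionality",
    causes=["unrestricted query access"],
    mitigations=["access control", "rate limiting"],
)
print(example.attacker_goal)

A list of such records would support the kind of analysis the abstract describes, for example counting how many incidents list access control among their mitigations.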

Cite

Text

Grosse et al. "When Your AI Becomes a Target: AI Security Incidents and Best Practices." AAAI Conference on Artificial Intelligence, 2024. doi:10.1609/aaai.v38i21.30347

Markdown

[Grosse et al. "When Your AI Becomes a Target: AI Security Incidents and Best Practices." AAAI Conference on Artificial Intelligence, 2024.](https://mlanthology.org/aaai/2024/grosse2024aaai-your/) doi:10.1609/aaai.v38i21.30347

BibTeX

@inproceedings{grosse2024aaai-your,
  title     = {{When Your AI Becomes a Target: AI Security Incidents and Best Practices}},
  author    = {Grosse, Kathrin and Bieringer, Lukas and Besold, Tarek R. and Biggio, Battista and Alahi, Alexandre},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2024},
  pages     = {23041--23046},
  doi       = {10.1609/aaai.v38i21.30347},
  url       = {https://mlanthology.org/aaai/2024/grosse2024aaai-your/}
}