Adversarial Inputs for Linear Algebra Backends

Abstract

Linear algebra is a cornerstone of neural network inference. The efficiency of popular frameworks, such as TensorFlow and PyTorch, critically depends on backend libraries providing highly optimized matrix multiplications and convolutions. A diverse range of these backends exists across platforms, including Intel MKL, Nvidia CUDA, and Apple Accelerate. Although these backends provide equivalent functionality, subtle variations in their implementations can lead to seemingly negligible differences during inference. In this paper, we investigate these minor discrepancies and demonstrate how they can be selectively amplified by adversaries. Specifically, we introduce Chimera examples, inputs to models that elicit conflicting predictions depending on the employed backend library. These inputs can even be constructed with integer values, creating a vulnerability exploitable from real-world input domains. We analyze the prevalence and extent of the underlying attack surface and propose corresponding defenses to mitigate this threat.
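The effect the abstract describes stems from a basic property of floating-point arithmetic: mathematically equivalent reductions can round differently depending on the order of operations, which is exactly what varies between backends (blocking, vectorization, and FMA usage all change the accumulation sequence). Below is a minimal sketch of this effect in NumPy. It is not the paper's Chimera construction; the seed, vector size, and borderline threshold are illustrative assumptions standing in for two backends and a decision boundary.

import numpy as np

# Two mathematically equivalent dot products that differ only in
# summation order, mimicking two linear algebra backends.
rng = np.random.default_rng(0)
w = rng.standard_normal(1024).astype(np.float32)
x = rng.standard_normal(1024).astype(np.float32)
prod = w * x

forward = np.float32(0.0)
for v in prod:              # "backend A": left-to-right accumulation
    forward = forward + v

backward = np.float32(0.0)
for v in prod[::-1]:        # "backend B": right-to-left accumulation
    backward = backward + v

# The gap is tiny but generally nonzero (exact values depend on
# hardware and NumPy version).
print(forward, backward, forward - backward)

# Hypothetical decision boundary placed inside that gap: the two
# accumulation orders now disagree on the predicted class.
threshold = np.float32(0.5) * (forward + backward)
print("backend A predicts:", int(forward > threshold))
print("backend B predicts:", int(backward > threshold))

An adversary who can steer an intermediate activation close to such a boundary turns this benign numerical noise into a prediction that flips with the backend, which is the attack surface the paper analyzes.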

Cite

Text

Möller et al. "Adversarial Inputs for Linear Algebra Backends." Proceedings of the 42nd International Conference on Machine Learning, 2025.

Markdown

[Möller et al. "Adversarial Inputs for Linear Algebra Backends." Proceedings of the 42nd International Conference on Machine Learning, 2025.](https://mlanthology.org/icml/2025/moller2025icml-adversarial/)

BibTeX

@inproceedings{moller2025icml-adversarial,
  title     = {{Adversarial Inputs for Linear Algebra Backends}},
  author    = {Möller, Jonas and Pirch, Lukas and Weissberg, Felix and Baunsgaard, Sebastian and Eisenhofer, Thorsten and Rieck, Konrad},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  year      = {2025},
  pages     = {44615--44626},
  volume    = {267},
  url       = {https://mlanthology.org/icml/2025/moller2025icml-adversarial/}
}