Adversarial Attacks on Transformers-Based Malware Detectors

Abstract

Signature-based malware detectors have proven insufficient, as even a small change to malicious executable code can bypass them. Many machine learning-based models have been proposed to efficiently detect a wide variety of malware. However, many of these models are susceptible to adversarial attacks, that is, attacks that work by generating intentionally designed inputs that force these models to misclassify. Our work explores vulnerabilities of current state-of-the-art malware detectors to adversarial attacks. We train a Transformer-based malware detector, carry out adversarial attacks that achieve a misclassification rate of 23.9%, and propose defenses that reduce this misclassification rate by half. An implementation of our work can be found at https://github.com/yashjakhotiya/Adversarial-Attacks-On-Transformers.
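To make the attack class concrete, here is a minimal, generic sketch of gradient-sign (FGSM-style) adversarial perturbation against a toy linear detector on a feature vector. This illustrates the general idea of "intentionally designed inputs that force misclassification" only; it is not the paper's attack, and all names here (`score`, `fgsm_perturb`, the toy weights) are hypothetical.

```python
import numpy as np

def score(w, b, x):
    """Logit of a toy linear 'malware detector': positive => flagged as malware."""
    return float(w @ x + b)

def fgsm_perturb(w, x, eps):
    """One FGSM-style step: move x against the gradient of the malware score.

    For a linear model the gradient of the score w.r.t. x is just w,
    so the perturbation is -eps * sign(w). (Illustrative only; real
    attacks on executables must also keep the file functional.)
    """
    return x - eps * np.sign(w)

# Toy detector weights and a 'malicious' feature vector it classifies correctly.
w = np.array([1.0, -2.0, 0.5])
b = 0.1
x = np.array([2.0, -1.0, 1.0])

assert score(w, b, x) > 0           # originally detected as malware
x_adv = fgsm_perturb(w, x, eps=3.0)
assert score(w, b, x_adv) < 0       # perturbed input now evades the detector
```

The same principle, i.e. following the model's gradient to craft a small input change that flips the prediction, underlies the attacks on learned malware detectors that the paper studies.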

Cite

Text

Jakhotiya et al. "Adversarial Attacks on Transformers-Based Malware Detectors." NeurIPS 2022 Workshops: MLSW, 2022.

Markdown

[Jakhotiya et al. "Adversarial Attacks on Transformers-Based Malware Detectors." NeurIPS 2022 Workshops: MLSW, 2022.](https://mlanthology.org/neuripsw/2022/jakhotiya2022neuripsw-adversarial/)

BibTeX

@inproceedings{jakhotiya2022neuripsw-adversarial,
  title     = {{Adversarial Attacks on Transformers-Based Malware Detectors}},
  author    = {Jakhotiya, Yash and Patil, Heramb and Rawlani, Jugal and Mane, Sunil},
  booktitle = {NeurIPS 2022 Workshops: MLSW},
  year      = {2022},
  url       = {https://mlanthology.org/neuripsw/2022/jakhotiya2022neuripsw-adversarial/}
}