On Feasibility of Intent Obfuscating Attacks

Abstract

Intent obfuscation is a common tactic in adversarial situations, enabling the attacker to both manipulate the target system and avoid culpability. Surprisingly, it has rarely been implemented in adversarial attacks on machine learning systems. We are the first to propose incorporating intent obfuscation into the generation of adversarial examples for object detectors: by perturbing another, non-overlapping object to disrupt the target object, the attacker hides their intended target. We conduct a randomized experiment on five prominent detectors (YOLOv3, SSD, RetinaNet, Faster R-CNN, and Cascade R-CNN) using both targeted and untargeted attacks, and achieve success on all models and attack types. We analyze the success factors characterizing intent obfuscating attacks, including target object confidence and perturb object size. We then demonstrate that the attacker can exploit these success factors to increase success rates for all models and attacks. Finally, we discuss known defenses and legal repercussions.
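
For illustration, below is a minimal sketch of one way such an attack could be mounted: a PGD-style untargeted attack on torchvision's pre-trained Faster R-CNN, where the perturbation is confined to the bounding box of a non-overlapping "perturb" object while the detection loss is computed on the intended target object. The model choice, boxes, iteration budget, and epsilon are illustrative placeholders, not the paper's exact setup.

# Sketch of an intent obfuscating attack (assumed PGD variant, not the authors' exact method).
import torch
import torchvision

# Pre-trained detector; train() mode so the forward pass returns a loss dict.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.train()
for p in model.parameters():
    p.requires_grad_(False)

image = torch.rand(3, 480, 640)                        # placeholder image in [0, 1]
target_box = torch.tensor([[300., 200., 380., 300.]])  # object the attacker actually wants to disrupt
perturb_box = (50, 100, 160, 220)                      # non-overlapping (x1, y1, x2, y2) region the attacker may modify

# Only pixels inside the perturb object's box may change: this is the intent obfuscation.
mask = torch.zeros_like(image)
x1, y1, x2, y2 = perturb_box
mask[:, y1:y2, x1:x2] = 1.0

eps, step, iters = 16 / 255, 2 / 255, 40
delta = torch.zeros_like(image)
annotation = {"boxes": target_box, "labels": torch.tensor([1])}  # hypothetical label for the target object

for _ in range(iters):
    delta.requires_grad_(True)
    adv = (image + delta * mask).clamp(0, 1)
    losses = model([adv], [annotation])   # dict of detection losses for the target annotation
    loss = sum(losses.values())
    loss.backward()
    with torch.no_grad():
        # Untargeted variant: ascend the detection loss so the target object is missed,
        # while touching only the masked (perturb-object) pixels.
        delta = (delta + step * delta.grad.sign()).clamp(-eps, eps).detach()

adv_image = (image + delta * mask).clamp(0, 1)

A targeted variant would instead descend a loss toward an attacker-chosen label for the target box; in both variants every pixel outside the perturb object is left untouched, which is what obscures the attacker's intended target.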

Cite

Text

Li and Shafto. "On Feasibility of Intent Obfuscating Attacks." ICML 2023 Workshops: AdvML-Frontiers, 2023.

Markdown

[Li and Shafto. "On Feasibility of Intent Obfuscating Attacks." ICML 2023 Workshops: AdvML-Frontiers, 2023.](https://mlanthology.org/icmlw/2023/li2023icmlw-feasibility/)

BibTeX

@inproceedings{li2023icmlw-feasibility,
  title     = {{On Feasibility of Intent Obfuscating Attacks}},
  author    = {Li, ZhaoBin and Shafto, Patrick},
  booktitle = {ICML 2023 Workshops: AdvML-Frontiers},
  year      = {2023},
  url       = {https://mlanthology.org/icmlw/2023/li2023icmlw-feasibility/}
}