Robust Assessment of Real-World Adversarial Examples

Abstract

We explore rigorous, systematic, and controlled experimental evaluation of adversarial examples in the real world and propose a testing regimen for evaluating real-world adversarial objects. We show that small scene and environmental perturbations produce large differences in adversarial performance. The current state of adversarial reporting consists largely of frequency counts over a dynamic collection of scenes. Our work underscores the need for either a more complete report or a score that incorporates scene changes and baseline performance for the models and environments tested by adversarial developers. We put forth a score that addresses these issues and demonstrate it in a straightforward exemplar application to multiple generated adversarial examples. We contribute the following: 1. a testbed for adversarial assessment, 2. a score for adversarial examples, and 3. a collection of additional evaluations on testbed data.
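To make the idea of a scene-aware score concrete, the Python sketch below shows one possible way to combine per-scene attack success with baseline model performance into a single number. This is a minimal illustration under assumed inputs (the function name scene_aware_score, the weighting by baseline accuracy, and the variability penalty are all hypothetical), not the score actually defined in the paper.

    # Hypothetical sketch of a scene-aware robustness score for a physical
    # adversarial object. The exact formula is an illustrative assumption,
    # not the authors' definition.
    from statistics import mean, pstdev

    def scene_aware_score(per_scene_results):
        """per_scene_results: list of dicts, one per scene/environment condition:
          'baseline_acc'   - model accuracy on the unperturbed object in that scene
          'attack_success' - fraction of frames where the adversarial object
                             fools the model in that scene
        Returns a score in [0, 1]: higher means the attack succeeds consistently
        across scenes and the model was otherwise reliable there."""
        # Weight each scene's attack success by clean-object accuracy, so "wins"
        # against an already-failing model count for less.
        weighted = [r["attack_success"] * r["baseline_acc"] for r in per_scene_results]
        # Penalize variability across scenes: an attack that only works in a few
        # environments should score lower than one that transfers broadly.
        return max(0.0, mean(weighted) - pstdev(weighted))

    # Example usage with made-up measurements over three scene conditions
    # (e.g. different lighting, camera distance, viewing angle).
    results = [
        {"baseline_acc": 0.95, "attack_success": 0.80},
        {"baseline_acc": 0.90, "attack_success": 0.40},
        {"baseline_acc": 0.85, "attack_success": 0.75},
    ]
    print(f"scene-aware score: {scene_aware_score(results):.3f}")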

Cite

Text

Jefferson and Marrero. "Robust Assessment of Real-World Adversarial Examples." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2020. doi:10.1109/CVPRW50498.2020.00404

Markdown

[Jefferson and Marrero. "Robust Assessment of Real-World Adversarial Examples." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2020.](https://mlanthology.org/cvprw/2020/jefferson2020cvprw-robust/) doi:10.1109/CVPRW50498.2020.00404

BibTeX

@inproceedings{jefferson2020cvprw-robust,
  title     = {{Robust Assessment of Real-World Adversarial Examples}},
  author    = {Jefferson, Brett A. and Marrero, Carlos Ortiz},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
  year      = {2020},
  pages     = {3442--3449},
  doi       = {10.1109/CVPRW50498.2020.00404},
  url       = {https://mlanthology.org/cvprw/2020/jefferson2020cvprw-robust/}
}