Assistive Signals for Deep Neural Network Classifiers

Abstract

Deep Neural Networks are brittle in that small changes in the input can drastically affect their prediction outcome and confidence. Consequently, research in this area has mainly focused on adversarial attacks and defenses. In this paper, we take an alternative stance and introduce the concept of Assistive Signals, which are perturbations optimized to improve a model’s confidence score regardless of whether it is under attack. We analyze some interesting properties of these assistive perturbations and extend the idea to optimize them in 3D space, simulating different lighting conditions and viewing angles. Experimental evaluations show that the assistive signals generated by our optimization method increase the accuracy and confidence of deep models more than those generated by conventional methods that work in 2D space. ‘Assistive Signals’ also illustrate the bias of ML models towards certain patterns in real-life objects.
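To make the core idea concrete, below is a minimal sketch of the 2D image-space variant: gradient ascent on an additive perturbation that raises the classifier’s confidence in the true class, with a norm bound mirroring adversarial-attack conventions. This is an illustrative assumption, not the paper’s exact procedure; the names `model`, `x`, and `label` are hypothetical stand-ins (a pretrained classifier, a normalized input batch, and its true labels), and the paper’s 3D rendering, lighting, and viewpoint optimization is not shown.

# Minimal sketch (PyTorch, assumed setup): optimize an "assistive signal"
# that increases the model's confidence in the true class.
import torch
import torch.nn.functional as F

def assistive_signal(model, x, label, eps=0.03, steps=40, lr=0.01):
    """Return a small perturbation delta that boosts confidence in `label`.

    model : any pretrained classifier (hypothetical placeholder)
    x     : input tensor of shape (N, C, H, W)
    label : true class indices, shape (N,)
    eps   : L-infinity bound keeping the signal imperceptibly small
    """
    delta = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        logits = model(x + delta)
        # Minimizing the true-class NLL maximizes its softmax confidence.
        loss = F.cross_entropy(logits, label)
        opt.zero_grad()
        loss.backward()
        opt.step()
        # Project back into the norm ball, as in adversarial-style methods.
        with torch.no_grad():
            delta.clamp_(-eps, eps)
    return delta.detach()

# Example use (hypothetical): x_assisted = x + assistive_signal(net, x, y)

Note that this is the same machinery as a targeted adversarial attack, only with the target set to the ground-truth class, which is what distinguishes assistive from adversarial perturbations.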

Cite

Text

Pestana et al. "Assistive Signals for Deep Neural Network Classifiers." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2021. doi:10.1109/CVPRW53098.2021.00133

Markdown

[Pestana et al. "Assistive Signals for Deep Neural Network Classifiers." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2021.](https://mlanthology.org/cvprw/2021/pestana2021cvprw-assistive/) doi:10.1109/CVPRW53098.2021.00133

BibTeX

@inproceedings{pestana2021cvprw-assistive,
  title     = {{Assistive Signals for Deep Neural Network Classifiers}},
  author    = {Pestana, Camilo and Liu, Wei and Glance, David G. and Owens, Robyn A. and Mian, Ajmal},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
  year      = {2021},
  pages     = {1221--1225},
  doi       = {10.1109/CVPRW53098.2021.00133},
  url       = {https://mlanthology.org/cvprw/2021/pestana2021cvprw-assistive/}
}