Reducing Classifier Overconfidence Against Adversaries Through Graph Algorithms

Abstract

In this work we show that deep learning classifiers tend to become overconfident in their answers under adversarial attacks, even when the classifier has been adversarially trained to withstand such attacks. Drawing on stochastic geometry and graph algorithms, we propose a general framework that replaces the classifier's last fully connected layer and softmax output. This framework (a) can be applied to any classifier and (b) significantly reduces the classifier's overconfidence with little impact on accuracy compared to the original adversarially trained classifier. Its relative effectiveness increases as the attacker becomes more powerful. Our use of graph algorithms in adversarial learning is new and of independent interest. Finally, we demonstrate the advantages of this last-layer softmax replacement on image classification tasks under common adversarial attacks.
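The overconfidence phenomenon the abstract refers to can be illustrated with a minimal NumPy sketch (the logit values below are purely hypothetical and not from the paper): adversarial perturbations often push a classifier's logits further apart, which inflates the softmax's maximum probability even when the prediction is wrong.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over a vector of logits.
    e = np.exp(z - z.max())
    return e / e.sum()

# Hypothetical logits from a 3-class classifier on a clean input.
clean_logits = np.array([2.0, 1.0, 0.5])

# Illustrative adversarially perturbed logits: the gap between the
# top logit and the rest has widened, inflating reported confidence.
adv_logits = np.array([4.0, 0.5, -0.5])

clean_conf = softmax(clean_logits).max()  # ≈ 0.629
adv_conf = softmax(adv_logits).max()      # ≈ 0.960
print(round(clean_conf, 3), round(adv_conf, 3))
```

This is only a sketch of why a raw softmax output can be a poor confidence estimate under attack; the paper's actual remedy replaces the last fully connected layer and softmax with its graph-algorithm-based framework.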

Cite

Text

Teixeira et al. "Reducing Classifier Overconfidence Against Adversaries Through Graph Algorithms." Machine Learning, 2023. doi:10.1007/s10994-023-06307-y

Markdown

[Teixeira et al. "Reducing Classifier Overconfidence Against Adversaries Through Graph Algorithms." Machine Learning, 2023.](https://mlanthology.org/mlj/2023/teixeira2023mlj-reducing/) doi:10.1007/s10994-023-06307-y

BibTeX

@article{teixeira2023mlj-reducing,
  title     = {{Reducing Classifier Overconfidence Against Adversaries Through Graph Algorithms}},
  author    = {Teixeira, Leonardo and Jalaian, Brian and Ribeiro, Bruno},
  journal   = {Machine Learning},
  year      = {2023},
  pages     = {2619--2651},
  doi       = {10.1007/s10994-023-06307-y},
  volume    = {112},
  url       = {https://mlanthology.org/mlj/2023/teixeira2023mlj-reducing/}
}