Adversarially Robust Learning via Entropic Regularization

Abstract

In this paper we propose ATENT, a new family of algorithms for training adversarially robust deep neural networks. We formulate a new loss function equipped with an entropic regularization term. Our loss accounts for the contribution of adversarial samples drawn from a specially designed distribution that assigns high probability to points that incur high loss and lie in the immediate neighborhood of training samples. ATENT achieves competitive (or better) robust classification accuracy compared to several state-of-the-art robust learning approaches on benchmark datasets such as MNIST and CIFAR-10.
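Conceptually, the loss reweights perturbed copies of each training point so that high-loss neighbors dominate the objective. The PyTorch sketch below is a minimal illustration of that idea only, not the authors' published algorithm: the sampler (isotropic Gaussian noise around each input), the softmax reweighting over sampled losses, and all parameter names (`n_samples`, `sigma`, `temperature`) are assumptions made for exposition.

```python
# Hypothetical sketch: an entropy-regularized adversarial loss in the
# spirit of the abstract. Candidate perturbations are drawn near each
# training sample and reweighted by a softmax over their losses, so
# high-loss neighbors receive high probability. Not ATENT's actual sampler.
import torch
import torch.nn.functional as F


def entropic_adversarial_loss(model, x, y, n_samples=8, sigma=0.1,
                              temperature=5.0):
    losses = []
    for _ in range(n_samples):
        # Draw a candidate adversarial point in the neighborhood of x.
        x_pert = x + sigma * torch.randn_like(x)
        logits = model(x_pert)
        # Per-example cross-entropy at the perturbed point.
        losses.append(F.cross_entropy(logits, y, reduction="none"))
    losses = torch.stack(losses, dim=0)  # shape: (n_samples, batch)
    # Softmax over the sample dimension: high-loss neighbors dominate,
    # mimicking a distribution concentrated on damaging perturbations.
    weights = F.softmax(temperature * losses.detach(), dim=0)
    return (weights * losses).sum(dim=0).mean()
```

In a training loop, this weighted loss would simply replace plain cross-entropy before the backward pass.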

Cite

Text

Jagatap et al. "Adversarially Robust Learning via Entropic Regularization." ICML 2021 Workshops: AML, 2021.

Markdown

[Jagatap et al. "Adversarially Robust Learning via Entropic Regularization." ICML 2021 Workshops: AML, 2021.](https://mlanthology.org/icmlw/2021/jagatap2021icmlw-adversarially/)

BibTeX

@inproceedings{jagatap2021icmlw-adversarially,
  title     = {{Adversarially Robust Learning via Entropic Regularization}},
  author    = {Jagatap, Gauri and Joshi, Ameya and Chowdhury, Animesh Basak and Garg, Siddharth and Hegde, Chinmay},
  booktitle = {ICML 2021 Workshops: AML},
  year      = {2021},
  url       = {https://mlanthology.org/icmlw/2021/jagatap2021icmlw-adversarially/}
}