Mathematical Theory of Adversarial Deep Learning
Abstract
In this Show-and-Tell Demos paper, progress on the mathematical theory of adversarial deep learning is reported. First, achieving robust memorization is shown to be NP-hard for certain neural networks. Furthermore, neural networks with $O(Nn)$ parameters are constructed in polynomial time that achieve optimal robust memorization of any dataset of dimension $n$ and size $N$. Second, adversarial training is formulated as a Stackelberg game and is shown to yield a network with optimal adversarial accuracy when the Carlini-Wagner margin loss is used. Finally, the bias classifier is introduced and shown to be information-theoretically secure against original-model gradient-based attacks.
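As context for the Stackelberg formulation above, the Carlini-Wagner margin loss on a single example can be sketched as follows. This is a minimal illustration, not the paper's implementation; the function name and the confidence parameter `kappa` are our own labels for the standard formulation, in which the loss is the best wrong-class logit minus the true-class logit, floored at `-kappa`.

```python
def cw_margin_loss(logits, label, kappa=0.0):
    # Carlini-Wagner margin loss for one example:
    # max( max_{i != label} z_i - z_label, -kappa ).
    # Positive values mean the example is misclassified;
    # kappa > 0 demands a confidence margin before the loss saturates.
    best_wrong = max(z for i, z in enumerate(logits) if i != label)
    return max(best_wrong - logits[label], -kappa)

# The attacker (the follower in the Stackelberg game) perturbs the
# input to increase this loss; the defender trains the network to
# keep it at its floor of -kappa on perturbed inputs.
```

For instance, with logits `[2.0, 5.0, 1.0]` and true label `0`, the loss is `5.0 - 2.0 = 3.0`, reflecting a confidently wrong prediction.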
Cite
Text
Gao et al. "Mathematical Theory of Adversarial Deep Learning." ICML 2023 Workshops: AdvML-Frontiers, 2023.
Markdown
[Gao et al. "Mathematical Theory of Adversarial Deep Learning." ICML 2023 Workshops: AdvML-Frontiers, 2023.](https://mlanthology.org/icmlw/2023/gao2023icmlw-mathematical/)
BibTeX
@inproceedings{gao2023icmlw-mathematical,
title = {{Mathematical Theory of Adversarial Deep Learning}},
author = {Gao, Xiao-Shan and Yu, Lijia and Liu, Shuang},
booktitle = {ICML 2023 Workshops: AdvML-Frontiers},
year = {2023},
url = {https://mlanthology.org/icmlw/2023/gao2023icmlw-mathematical/}
}