A Sublinear Adversarial Training Algorithm
Abstract
Adversarial training is a widely used strategy for making neural networks resistant to adversarial perturbations. For a neural network of width $m$ trained on $n$ inputs in $d$ dimensions, each training iteration costs $\Omega(mnd)$ time for the forward and backward computation. In this paper we analyze the convergence guarantee of the adversarial training procedure on a two-layer neural network with shifted ReLU activation, and show that only $o(m)$ neurons are activated for each input per iteration. Furthermore, we develop an adversarial training algorithm with $o(mnd)$ time cost per iteration by applying a half-space reporting data structure.
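The sparsity claim rests on a standard property of the shifted ReLU $\sigma_b(z) = \max(0, z - b)$: for Gaussian-initialized weights and a unit-norm input, each pre-activation is a standard normal, so a sufficiently large shift $b$ leaves only a small fraction of neurons active. The sketch below illustrates this effect numerically; it is not the paper's implementation, and the particular width, dimension, and choice of $b$ are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch of shifted-ReLU sparsity (not the paper's code).
# With weights w_r ~ N(0, I_d) and a unit-norm input x, each pre-activation
# <w_r, x> ~ N(0, 1), so Pr[<w_r, x> > b] shrinks rapidly as b grows.

rng = np.random.default_rng(0)
m, d = 10_000, 64                 # width and input dimension (assumed values)
W = rng.standard_normal((m, d))   # one Gaussian weight vector per neuron
x = rng.standard_normal(d)
x /= np.linalg.norm(x)            # unit-norm input

b = np.sqrt(0.4 * np.log(m))      # shift threshold; this choice is an assumption
pre = W @ x                       # pre-activations, each ~ N(0, 1)
active = pre > b                  # neurons the shifted ReLU turns on

print(f"active neurons: {active.sum()} of {m} ({active.mean():.2%})")
```

Only the few active neurons contribute to the forward and backward pass, which is what makes a sublinear-in-$m$ iteration possible once those neurons can be located quickly (e.g., via half-space reporting) instead of by scanning all $m$.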
Cite
Text
Gao et al. "A Sublinear Adversarial Training Algorithm." International Conference on Learning Representations, 2024.
Markdown
[Gao et al. "A Sublinear Adversarial Training Algorithm." International Conference on Learning Representations, 2024.](https://mlanthology.org/iclr/2024/gao2024iclr-sublinear/)
BibTeX
@inproceedings{gao2024iclr-sublinear,
  title     = {{A Sublinear Adversarial Training Algorithm}},
  author    = {Gao, Yeqi and Qin, Lianke and Song, Zhao and Wang, Yitan},
  booktitle = {International Conference on Learning Representations},
  year      = {2024},
  url       = {https://mlanthology.org/iclr/2024/gao2024iclr-sublinear/}
}