Adversarially Robust Neural Networks via Optimal Control: Bridging Robustness with Lyapunov Stability
Abstract
Deep neural networks are known to be vulnerable to adversarial perturbations. In this paper, we bridge the adversarial robustness of neural networks with the Lyapunov stability of dynamical systems. From this viewpoint, training a neural network is equivalent to finding an optimal control of a discrete dynamical system, which allows one to apply the method of successive approximations (MSA), an optimal control algorithm based on Pontryagin's maximum principle, to train neural networks. This decoupled training method allows us to add constraints to the optimization, which makes the deep model more robust. The constrained optimization problem can be formulated as a semi-definite program and hence solved efficiently. Experiments show that our method effectively improves the adversarial robustness of deep models.
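As a rough illustration (not the authors' implementation), the MSA loop the abstract alludes to can be sketched for a toy linear system viewed as a layered network, x_{t+1} = W_t x_t with a terminal quadratic cost. All names, the architecture, and the step size are assumptions for this sketch; the decoupled per-layer update follows the standard Hamiltonian-ascent form of MSA:

```python
import numpy as np

rng = np.random.default_rng(0)

def msa_step(Ws, x0, y, lr=0.05):
    """One sweep of the method of successive approximations (MSA)
    for the linear discrete dynamical system x_{t+1} = W_t x_t,
    with terminal cost Phi(x_T) = 0.5 * ||x_T - y||^2."""
    # Forward pass: propagate the state through the "layers".
    xs = [x0]
    for W in Ws:
        xs.append(W @ xs[-1])
    # Backward pass: costates from Pontryagin's maximum principle,
    # p_T = -grad Phi(x_T),  p_t = W_t^T p_{t+1}.
    ps = [None] * (len(Ws) + 1)
    ps[-1] = -(xs[-1] - y)
    for t in reversed(range(len(Ws))):
        ps[t] = Ws[t].T @ ps[t + 1]
    # Decoupled per-layer update: gradient ascent on the Hamiltonian
    # H_t(W) = p_{t+1}^T W x_t, whose gradient is p_{t+1} x_t^T.
    for t in range(len(Ws)):
        Ws[t] += lr * np.outer(ps[t + 1], xs[t])
    return 0.5 * float(np.sum((xs[-1] - y) ** 2))

# Two-layer toy system driven toward a target terminal state y.
Ws = [rng.normal(size=(3, 3)) * 0.5 for _ in range(2)]
x0 = np.array([1.0, -1.0, 0.5])
y = np.array([0.2, 0.0, -0.3])
losses = [msa_step(Ws, x0, y) for _ in range(200)]
```

For this unconstrained linear case the MSA sweep coincides with gradient descent on the terminal cost; the paper's contribution is precisely that the per-layer Hamiltonian maximization admits extra (robustness) constraints, which it solves via semi-definite programming.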
Cite
Text
Chen and Su. "Adversarially Robust Neural Networks via Optimal Control: Bridging Robustness with Lyapunov Stability." International Conference on Learning Representations, 2020.
Markdown
[Chen and Su. "Adversarially Robust Neural Networks via Optimal Control: Bridging Robustness with Lyapunov Stability." International Conference on Learning Representations, 2020.](https://mlanthology.org/iclr/2020/chen2020iclr-adversarially/)
BibTeX
@inproceedings{chen2020iclr-adversarially,
title = {{Adversarially Robust Neural Networks via Optimal Control: Bridging Robustness with Lyapunov Stability}},
author = {Chen, Zhiyang and Su, Hang},
booktitle = {International Conference on Learning Representations},
year = {2020},
url = {https://mlanthology.org/iclr/2020/chen2020iclr-adversarially/}
}