Reinforcing Neural Network Stability with Attractor Dynamics

Abstract

Recent approaches interpret deep neural networks (DNNs) as dynamical systems, drawing a connection between stability in forward propagation and the generalization of DNNs. In this paper, we take a step further and are the first to reinforce this stability of DNNs without changing their original structure, and we verify the impact of the reinforced stability on the network representation from various aspects. More specifically, we reinforce stability by modeling the attractor dynamics of a DNN and propose the relu-max attractor network (RMAN), a lightweight module that is readily deployed on state-of-the-art ResNet-like networks. RMAN is needed only during training, where it modifies a ResNet's attractor dynamics by minimizing an energy function together with the loss of the original learning task. Through extensive experiments, we show that RMAN-modified attractor dynamics bring a more structured representation space to ResNet and its variants and, more importantly, improve the generalization ability of ResNet-like networks in supervised tasks due to the reinforced stability.
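
To make the training recipe concrete, below is a minimal PyTorch sketch of how an auxiliary attractor-energy module could be trained jointly with a task loss and discarded at inference, as the abstract describes. The AttractorEnergy module, its relu-max-style update, and the weighting hyperparameter lambda_e are illustrative assumptions, not the paper's exact RMAN formulation.

import torch
import torch.nn as nn

class AttractorEnergy(nn.Module):
    """Hypothetical auxiliary module in the spirit of RMAN: it defines
    simple attractor dynamics over a feature vector and exposes an
    energy term to be minimized jointly with the task loss.
    The relu(W h) update below is illustrative, not the paper's rule."""
    def __init__(self, dim: int):
        super().__init__()
        self.W = nn.Linear(dim, dim, bias=False)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # One step of the assumed dynamics: h -> relu(W h).
        h_next = torch.relu(self.W(h))
        # Energy is low when h is close to a fixed point of the
        # dynamics, i.e., when the forward pass is stable around h.
        return (h_next - h).pow(2).sum(dim=1).mean()

# Joint objective (sketch): the energy module is used only during
# training, so the backbone's structure is unchanged at inference.
backbone = nn.Sequential(nn.Linear(32, 64), nn.ReLU())
head = nn.Linear(64, 10)
rman = AttractorEnergy(64)   # hypothetical module name and width
criterion = nn.CrossEntropyLoss()
lambda_e = 0.1               # hypothetical energy weight

x = torch.randn(8, 32)
y = torch.randint(0, 10, (8,))
feats = backbone(x)
loss = criterion(head(feats), y) + lambda_e * rman(feats)
loss.backward()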

Cite

Text

Deng et al. "Reinforcing Neural Network Stability with Attractor Dynamics." AAAI Conference on Artificial Intelligence, 2020. doi:10.1609/AAAI.V34I04.5787

Markdown

[Deng et al. "Reinforcing Neural Network Stability with Attractor Dynamics." AAAI Conference on Artificial Intelligence, 2020.](https://mlanthology.org/aaai/2020/deng2020aaai-reinforcing/) doi:10.1609/AAAI.V34I04.5787

BibTeX

@inproceedings{deng2020aaai-reinforcing,
  title     = {{Reinforcing Neural Network Stability with Attractor Dynamics}},
  author    = {Deng, Hanming and Hua, Yang and Song, Tao and Xue, Zhengui and Ma, Ruhui and Robertson, Neil and Guan, Haibing},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2020},
  pages     = {3765--3772},
  doi       = {10.1609/AAAI.V34I04.5787},
  url       = {https://mlanthology.org/aaai/2020/deng2020aaai-reinforcing/}
}