Training Structured Neural Networks Through Manifold Identification and Variance Reduction

Abstract

This paper proposes an algorithm, RMDA, for training neural networks (NNs) with a regularization term that promotes desired structures. RMDA incurs no computation beyond that of proximal SGD with momentum, yet achieves variance reduction without requiring the objective function to be of the finite-sum form. Using the tool of manifold identification from nonlinear optimization, we prove that after a finite number of iterations, all iterates of RMDA possess the structure induced by the regularizer at the stationary point of asymptotic convergence, even in the presence of engineering tricks like data augmentation that complicate the training process. Experiments on training NNs with structured sparsity confirm that variance reduction is necessary for such identification and show that RMDA therefore significantly outperforms existing methods for this task. For unstructured sparsity, RMDA also outperforms a state-of-the-art pruning method, validating the benefits of training structured NNs through regularization. An implementation of RMDA is available at https://www.github.com/zihsyuan1214/rmda.
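To make the setting concrete, below is a minimal sketch of the proximal-SGD-with-momentum baseline the abstract mentions, paired with the block-soft-thresholding proximal operator of a group-LASSO regularizer, a standard choice for inducing structured sparsity. The function names, hyperparameters, and toy loss are illustrative assumptions, not the paper's RMDA implementation.

```python
import numpy as np

def prox_group_lasso(w, tau):
    """Proximal operator of tau * ||w||_2 on one parameter group:
    block soft-thresholding, which can zero out the whole group exactly."""
    norm = np.linalg.norm(w)
    if norm <= tau:
        return np.zeros_like(w)
    return (1.0 - tau / norm) * w

def proximal_sgd_momentum_step(w, v, grad, lr, beta, lam):
    """One proximal SGD step with heavy-ball momentum on a single group.
    Applying the prox after the momentum update places the iterate exactly
    on the sparse manifold whenever the group becomes small enough."""
    v = beta * v + grad                      # momentum buffer
    w = prox_group_lasso(w - lr * v, lr * lam)
    return w, v
```

As a toy illustration with deterministic gradients (so there is no stochastic noise to defeat identification), the whole group is driven to exactly zero:

```python
rng = np.random.default_rng(0)
w, v = rng.standard_normal(8), np.zeros(8)
for _ in range(100):
    grad = w - 0.01                          # gradient of 0.5 * ||w - 0.01||^2
    w, v = proximal_sgd_momentum_step(w, v, grad, lr=0.1, beta=0.9, lam=0.5)
print(w)                                     # all zeros: the group is identified
```

With mini-batch gradients, noise would keep perturbing the iterates off this zero manifold at every step; that is the failure mode the abstract's variance-reduction argument addresses.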

Cite

Text

Huang and Lee. "Training Structured Neural Networks Through Manifold Identification and Variance Reduction." International Conference on Learning Representations, 2022.

Markdown

[Huang and Lee. "Training Structured Neural Networks Through Manifold Identification and Variance Reduction." International Conference on Learning Representations, 2022.](https://mlanthology.org/iclr/2022/huang2022iclr-training/)

BibTeX

@inproceedings{huang2022iclr-training,
  title     = {{Training Structured Neural Networks Through Manifold Identification and Variance Reduction}},
  author    = {Huang, Zih-Syuan and Lee, Ching-pei},
  booktitle = {International Conference on Learning Representations},
  year      = {2022},
  url       = {https://mlanthology.org/iclr/2022/huang2022iclr-training/}
}