Gradient and Magnitude Based Pruning for Sparse Deep Neural Networks

Abstract

Deep Neural Networks have memory and computational demands that often render them difficult to use in low-resource environments. Moreover, highly dense networks are over-parameterized and thus prone to overfitting. To address these problems, we introduce a novel algorithm that prunes (sparsifies) weights from the network by taking into account their magnitudes and their gradients computed on a validation dataset. Unlike existing pruning methods, our method does not require the network to be retrained after initial training is completed. On the CIFAR-10 dataset, our method reduced the number of parameters of MobileNet by a factor of 9x, from 14 million to 1.5 million, with only a 3.8% drop in accuracy.
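The core idea described above can be sketched with a simple saliency-based pruning step. This is a minimal illustration, not the paper's implementation: it assumes the saliency score is the sum of weight magnitude and validation-gradient magnitude, and the paper's exact scoring rule and schedule may differ.

```python
import numpy as np

def prune_mask(weights, grads, sparsity):
    """Return a binary mask keeping the top (1 - sparsity) fraction of
    weights ranked by a combined magnitude-and-gradient saliency score.

    The score `|w| + |g|` is an illustrative combination (an assumption),
    where `grads` are gradients computed on a validation set.
    """
    score = np.abs(weights) + np.abs(grads)
    k = int(np.ceil((1.0 - sparsity) * score.size))  # number of weights to keep
    threshold = np.partition(score.ravel(), -k)[-k]  # k-th largest score
    return (score >= threshold).astype(weights.dtype)

# Example: prune 75% of a small weight matrix.
w = np.array([[0.9, -0.01], [0.05, -0.7]])
g = np.array([[0.1, 0.02], [0.03, 0.2]])
mask = prune_mask(w, g, sparsity=0.75)
sparse_w = w * mask  # pruned weights are zeroed out
```

Because pruning happens once after training, the mask can simply be applied to the trained weights without a retraining pass.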

Cite

Text

Belay. "Gradient and Magnitude Based Pruning for Sparse Deep Neural Networks." AAAI Conference on Artificial Intelligence, 2022. doi:10.1609/AAAI.V36I11.21699

Markdown

[Belay. "Gradient and Magnitude Based Pruning for Sparse Deep Neural Networks." AAAI Conference on Artificial Intelligence, 2022.](https://mlanthology.org/aaai/2022/belay2022aaai-gradient/) doi:10.1609/AAAI.V36I11.21699

BibTeX

@inproceedings{belay2022aaai-gradient,
  title     = {{Gradient and Magnitude Based Pruning for Sparse Deep Neural Networks}},
  author    = {Belay, Kaleab},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2022},
  pages     = {13126--13127},
  doi       = {10.1609/AAAI.V36I11.21699},
  url       = {https://mlanthology.org/aaai/2022/belay2022aaai-gradient/}
}