Neural Network Optimization with Weight Evolution
Abstract
In contrast to magnitude pruning, which inspects parameter values only at the end of training and removes the insignificant ones, this paper introduces a new approach that estimates the importance of each parameter holistically. The proposed method tracks the parameter values from the first epoch to the last and computes a weighted average across training, giving more weight to values closer to the completion of training. We test this method on popular deep neural networks such as AlexNet, VGGNet, ResNet, and DenseNet, on benchmark datasets including CIFAR-10 and Tiny ImageNet. The results show that our approach achieves higher compression with a smaller loss of accuracy compared to magnitude pruning.
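The idea in the abstract, scoring each parameter by a weighted average of its magnitude over training rather than its final value alone, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the exponential weighting controlled by `gamma` and the helper names (`weighted_importance`, `prune_mask`) are assumptions chosen for clarity.

```python
import numpy as np

def weighted_importance(weight_history, gamma=0.9):
    """Score each parameter by a weighted average of its magnitude
    across epochs, weighting later epochs more heavily.
    NOTE: the exponential scheme below is an illustrative assumption;
    the paper's exact weighting may differ."""
    T = len(weight_history)
    # Epoch t gets weight gamma^(T-1-t), so the final epoch gets weight 1.
    coeffs = np.array([gamma ** (T - 1 - t) for t in range(T)])
    coeffs /= coeffs.sum()
    stacked = np.abs(np.stack(weight_history))  # shape (T, *param_shape)
    return np.tensordot(coeffs, stacked, axes=1)

def prune_mask(importance, sparsity=0.5):
    """Keep parameters above the sparsity-quantile of the importance
    score; the pruning step itself mirrors magnitude pruning."""
    k = int(sparsity * importance.size)
    threshold = np.partition(importance.flatten(), k)[k]
    return importance >= threshold
```

For example, a parameter that was large early in training but shrank toward the end receives a lower score than one that grew large late, whereas end-of-training magnitude pruning would treat only their final values:

```python
# history for two parameters over two epochs
history = [np.array([1.0, 0.1]),   # epoch 0
           np.array([0.1, 1.0])]   # final epoch
scores = weighted_importance(history, gamma=0.5)
mask = prune_mask(scores, sparsity=0.5)  # prunes the first parameter
```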
Cite
Text
Belhaouari and Islam. "Neural Network Optimization with Weight Evolution." ICML 2023 Workshops: NCW, 2023.
Markdown
[Belhaouari and Islam. "Neural Network Optimization with Weight Evolution." ICML 2023 Workshops: NCW, 2023.](https://mlanthology.org/icmlw/2023/belhaouari2023icmlw-neural/)
BibTeX
@inproceedings{belhaouari2023icmlw-neural,
title = {{Neural Network Optimization with Weight Evolution}},
author = {Belhaouari, Samir Brahim and Islam, Ashhadul},
booktitle = {ICML 2023 Workshops: NCW},
year = {2023},
url = {https://mlanthology.org/icmlw/2023/belhaouari2023icmlw-neural/}
}