Pruning Neural Networks with Velocity-Constrained Optimization
Abstract
Pruning has gained prominence as a way to compress over-parameterized neural networks. While pruning can be understood as solving a sparsity-constrained optimization problem, pruning by directly solving this problem has been relatively underexplored. In this paper, we propose a method to prune neural networks using the MJ algorithm, which interprets constrained optimization using the framework of velocity-constrained optimization. The experimental results show that our method can prune VGG19 and ResNet32 networks by more than 90% while preserving the high accuracy of the dense network.
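As context (our notation, not taken from the paper's text), the sparsity-constrained pruning problem mentioned in the abstract is commonly written as

\[
\min_{w \in \mathbb{R}^d} \; L(w) \quad \text{subject to} \quad \|w\|_0 \le k,
\]

where \(L\) is the training loss, \(w\) are the network parameters, and \(k\) is the number of weights allowed to remain nonzero (e.g., \(k \approx 0.1\,d\) for the >90% sparsity levels reported above).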
Cite
Text
Oh et al. "Pruning Neural Networks with Velocity-Constrained Optimization." NeurIPS 2023 Workshops: OPT, 2023.

Markdown
[Oh et al. "Pruning Neural Networks with Velocity-Constrained Optimization." NeurIPS 2023 Workshops: OPT, 2023.](https://mlanthology.org/neuripsw/2023/oh2023neuripsw-pruning/)

BibTeX
@inproceedings{oh2023neuripsw-pruning,
title = {{Pruning Neural Networks with Velocity-Constrained Optimization}},
author = {Oh, Donghyun and Chung, Jinseok and Lee, Namhoon},
booktitle = {NeurIPS 2023 Workshops: OPT},
year = {2023},
url = {https://mlanthology.org/neuripsw/2023/oh2023neuripsw-pruning/}
}