Noisy Neural Network Compression for Analog Storage Devices
Abstract
Efficient compression and storage of neural network (NN) parameters is critical for resource-constrained, downstream machine learning applications. Although several methods for NN compression have been developed, there has been considerably less work on the efficient storage of NN weights. While analog storage devices are promising alternatives to digital systems, their inherent noise presents challenges for model compression, as slight perturbations of the weights may significantly compromise the network's overall performance. In this work, we study a Phase Change Memory (PCM) analog non-volatile memory (NVM) array fabricated in hardware and develop a variety of robust coding strategies for NN weights that work well in practice. We demonstrate the efficacy of our approach on the MNIST and CIFAR-10 datasets for pruning and knowledge distillation.
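The core difficulty the abstract describes is that analog cells perturb stored values, and even small weight perturbations distort a network's outputs. A minimal numpy sketch of this effect (the network, noise model, and noise levels below are illustrative assumptions, not the paper's actual PCM channel model or coding scheme):

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny two-layer network with random weights standing in for a trained model.
W1 = rng.normal(size=(8, 16))
W2 = rng.normal(size=(16, 4))

def forward(x, w1, w2):
    h = np.maximum(x @ w1, 0.0)  # ReLU hidden layer
    return h @ w2

def store_noisy(w, sigma):
    # Crude stand-in for analog storage: additive Gaussian write noise
    # on every weight (a real PCM cell has a more structured channel).
    return w + rng.normal(scale=sigma, size=w.shape)

x = rng.normal(size=(32, 8))
clean = forward(x, W1, W2)

# Output distortion grows with the assumed write-noise level sigma.
mses = []
for sigma in (0.01, 0.1, 0.5):
    noisy = forward(x, store_noisy(W1, sigma), store_noisy(W2, sigma))
    mses.append(float(np.mean((clean - noisy) ** 2)))
    print(f"sigma={sigma:.2f}  output MSE={mses[-1]:.4f}")
```

Robust coding strategies of the kind the paper develops aim to keep this output distortion small at a given storage cost, rather than protecting every weight uniformly.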
Cite
Isik et al. "Noisy Neural Network Compression for Analog Storage Devices." NeurIPS 2020 Workshops: DL-IG, 2020. (https://mlanthology.org/neuripsw/2020/isik2020neuripsw-noisy/)
BibTeX
@inproceedings{isik2020neuripsw-noisy,
  title     = {{Noisy Neural Network Compression for Analog Storage Devices}},
  author    = {Isik, Berivan and Choi, Kristy and Zheng, Xin and Wong, H.-S. Philip and Ermon, Stefano and Weissman, Tsachy and Alaghi, Armin},
  booktitle = {NeurIPS 2020 Workshops: DL-IG},
  year      = {2020},
  url       = {https://mlanthology.org/neuripsw/2020/isik2020neuripsw-noisy/}
}