Improving Analog Neural Network Robustness: A Noise-Agnostic Approach with Explainable Regularizations
Abstract
This work tackles the critical challenge of mitigating "hardware noise" in deep analog neural networks, a major obstacle in advancing analog signal processing devices. We propose a comprehensive, hardware-agnostic solution that addresses both correlated and uncorrelated noise affecting the activation layers of deep neural models. The novelty of our approach lies in its ability to demystify the "black box" nature of noise-resilient networks by revealing the underlying mechanisms that reduce sensitivity to noise. Building on these mechanisms, we introduce a new explainable regularization framework that significantly enhances noise robustness in deep neural architectures, yielding over 53% accuracy improvement in noisy environments compared to conventionally trained models.
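The abstract refers to correlated and uncorrelated noise acting on activation layers. As a point of reference only, the sketch below shows one common way such hardware noise is emulated in software (PyTorch): wrapping an activation and adding a shared (correlated) and a per-neuron (uncorrelated) Gaussian term. The module name `NoisyActivation` and the specific noise decomposition are illustrative assumptions, not the authors' noise model or implementation.

```python
import torch
import torch.nn as nn

class NoisyActivation(nn.Module):
    """Wraps an activation and injects additive Gaussian noise,
    a common way to emulate analog hardware noise on activation layers.
    The split into a shared (correlated) term and a per-neuron
    (uncorrelated) term is an illustrative assumption."""

    def __init__(self, activation: nn.Module,
                 sigma_corr: float = 0.05, sigma_uncorr: float = 0.05):
        super().__init__()
        self.activation = activation
        self.sigma_corr = sigma_corr      # std of noise shared across a layer
        self.sigma_uncorr = sigma_uncorr  # std of independent per-neuron noise

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y = self.activation(x)
        # Correlated component: one noise sample per input, broadcast over features.
        corr = torch.randn(y.shape[0], *([1] * (y.dim() - 1)), device=y.device)
        # Uncorrelated component: independent noise per activation value.
        uncorr = torch.randn_like(y)
        return y + self.sigma_corr * corr + self.sigma_uncorr * uncorr

# Example: emulate hardware noise on the hidden activations of a small MLP.
model = nn.Sequential(
    nn.Linear(784, 256),
    NoisyActivation(nn.ReLU(), sigma_corr=0.05, sigma_uncorr=0.05),
    nn.Linear(256, 10),
)
logits = model(torch.randn(32, 784))
print(logits.shape)  # torch.Size([32, 10])
```

Evaluating a pretrained model under such a wrapper gives a rough measure of the accuracy degradation that noise-robust training methods, including the regularization framework described in the paper, aim to reduce.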
Cite
Text
Duque et al. "Improving Analog Neural Network Robustness: A Noise-Agnostic Approach with Explainable Regularizations." NeurIPS 2024 Workshops: MLNCP, 2024.
Markdown
[Duque et al. "Improving Analog Neural Network Robustness: A Noise-Agnostic Approach with Explainable Regularizations." NeurIPS 2024 Workshops: MLNCP, 2024.](https://mlanthology.org/neuripsw/2024/duque2024neuripsw-improving/)
BibTeX
@inproceedings{duque2024neuripsw-improving,
title = {{Improving Analog Neural Network Robustness: A Noise-Agnostic Approach with Explainable Regularizations}},
author = {Duque, Alice and Freire, Pedro and Manuylovich, Egor and Turitsyn, Sergei K. and Prilepsky, Jaroslaw E. and Stoliarov, Dmitrii},
booktitle = {NeurIPS 2024 Workshops: MLNCP},
year = {2024},
url = {https://mlanthology.org/neuripsw/2024/duque2024neuripsw-improving/}
}