Scaling-up Memristor Monte Carlo with Magnetic Domain-Wall Physics
Abstract
By exploiting the intrinsic random nature of nanoscale devices, Memristor Monte Carlo (MMC) is a promising enabler of edge learning systems. However, due to multiple algorithmic and device-level limitations, existing demonstrations have been restricted to very small neural network models and datasets. We discuss these limitations and describe how they can be overcome by mapping the stochastic gradient Langevin dynamics (SGLD) algorithm onto the physics of magnetic domain-wall memristors, scaling up MMC models by five orders of magnitude. We propose the push-pull pulse programming method, which realises SGLD in-physics, and use it to train a domain-wall-based ResNet18 on the CIFAR-10 dataset. On this task, we observe no performance degradation relative to a floating-point model down to an update precision of between 6 and 7 bits, indicating a step towards a large-scale edge learning system leveraging noisy analogue devices.
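For context, the SGLD update that the abstract maps onto domain-wall physics can be summarised in a few lines. The sketch below (plain NumPy, with illustrative variable names; not the authors' in-physics implementation) shows the software form of the update: a scaled gradient step plus Gaussian noise whose variance matches the step size.

import numpy as np

def sgld_step(theta, grad, lr, rng):
    # One stochastic gradient Langevin dynamics (SGLD) update:
    #   theta <- theta - (lr / 2) * grad + N(0, lr)
    # The injected Gaussian noise is the component the paper proposes to
    # obtain directly from the stochastic behaviour of domain-wall memristors.
    noise = rng.normal(loc=0.0, scale=np.sqrt(lr), size=np.shape(theta))
    return theta - 0.5 * lr * grad + noise

In the push-pull pulse programming scheme described in the paper, this combined deterministic-plus-stochastic increment is realised as programming pulses applied to the devices themselves rather than computed in software.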
Cite
Text
Dalgaty et al. "Scaling-up Memristor Monte Carlo with Magnetic Domain-Wall Physics." NeurIPS 2023 Workshops: MLNCP, 2023.
Markdown
[Dalgaty et al. "Scaling-up Memristor Monte Carlo with Magnetic Domain-Wall Physics." NeurIPS 2023 Workshops: MLNCP, 2023.](https://mlanthology.org/neuripsw/2023/dalgaty2023neuripsw-scalingup/)
BibTeX
@inproceedings{dalgaty2023neuripsw-scalingup,
title = {{Scaling-up Memristor Monte Carlo with Magnetic Domain-Wall Physics}},
author = {Dalgaty, Thomas and Yamada, Shogo and Molnos, Anca and Kawasaki, Eiji and Mesquida, Thomas and Rummens, François and Shibata, Tatsuo and Urakawa, Yukihiro and Terasaki, Yukio and Sasaki, Tomoyuki and Duranton, Marc},
booktitle = {NeurIPS 2023 Workshops: MLNCP},
year = {2023},
url = {https://mlanthology.org/neuripsw/2023/dalgaty2023neuripsw-scalingup/}
}