Analog VLSI Implementation of Multi-Dimensional Gradient Descent
Abstract
We describe an analog VLSI implementation of a multi-dimensional gradient estimation and descent technique for minimizing an on-chip scalar function f(). The implementation uses noise injection and multiplicative correlation to estimate derivatives, as in [Anderson, Kerns 92]. One intended application of this technique is setting circuit parameters on-chip automatically, rather than manually [Kirk 91]. Gradient descent optimization may be used to adjust synapse weights for a backpropagation or other on-chip learning implementation. The approach combines the features of continuous multi-dimensional gradient descent and the potential for an annealing style of optimization. We present data measured from our analog VLSI implementation.
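The gradient estimator the abstract describes is simple enough to illustrate numerically. Below is a minimal Python sketch, not the authors' analog circuit (which operates in continuous time in hardware), of estimating a gradient by injecting small zero-mean noise into the parameters and correlating the resulting change in f with that noise, then using the estimate for descent steps. Function names, step sizes, and the quadratic test function are illustrative assumptions.

```python
import numpy as np

def estimate_gradient(f, x, sigma=0.01, n_samples=200, rng=None):
    """Estimate grad f(x) by noise injection and multiplicative correlation:
    E[(f(x + n) - f(x)) * n] ~= sigma^2 * grad f(x) for small zero-mean noise n."""
    rng = np.random.default_rng() if rng is None else rng
    f0 = f(x)
    grad = np.zeros_like(x)
    for _ in range(n_samples):
        n = rng.normal(0.0, sigma, size=x.shape)   # injected perturbation
        grad += (f(x + n) - f0) * n                # correlate df with the noise
    return grad / (n_samples * sigma**2)

def gradient_descent(f, x0, lr=0.1, steps=100, **kw):
    """Continuous-time descent approximated here by discrete Euler steps."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x -= lr * estimate_gradient(f, x, **kw)
    return x

# Example: minimize a 2-D quadratic bowl f(x) = |x - target|^2.
target = np.array([0.5, -0.3])
f = lambda x: float(np.sum((x - target) ** 2))
print(gradient_descent(f, [1.0, 1.0]))   # converges toward [0.5, -0.3]
```

Raising the noise amplitude sigma and lowering it over time would give the annealing-style behavior mentioned in the abstract; the sketch keeps it fixed for clarity.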
Cite
Text
Kirk et al. "Analog VLSI Implementation of Multi-Dimensional Gradient Descent." Neural Information Processing Systems, 1992.
Markdown
[Kirk et al. "Analog VLSI Implementation of Multi-Dimensional Gradient Descent." Neural Information Processing Systems, 1992.](https://mlanthology.org/neurips/1992/kirk1992neurips-analog/)
BibTeX
@inproceedings{kirk1992neurips-analog,
title = {{Analog VLSI Implementation of Multi-Dimensional Gradient Descent}},
author = {Kirk, David B. and Kerns, Douglas and Fleischer, Kurt and Barr, Alan H.},
booktitle = {Neural Information Processing Systems},
year = {1992},
pages = {789--796},
url = {https://mlanthology.org/neurips/1992/kirk1992neurips-analog/}
}