GEMINI: Gradient Estimation Through Matrix Inversion After Noise Injection
Abstract
Learning procedures that measure how random perturbations of unit activities correlate with changes in reinforcement are inefficient but simple to implement in hardware. Procedures like back-propagation (Rumelhart, Hinton and Williams, 1986) which compute how changes in activities affect the output error are much more efficient, but require more complex hardware. GEMINI is a hybrid procedure for multilayer networks, which shares many of the implementation advantages of correlational reinforcement procedures but is more efficient. GEMINI injects noise only at the first hidden layer and measures the resultant effect on the output error. A linear network associated with each hidden layer iteratively inverts the matrix which relates the noise to the error change, thereby obtaining the error derivatives. No back-propagation is involved, thus allowing unknown non-linearities in the system. Two simulations demonstrate the effectiveness of GEMINI.
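The core idea in the abstract can be illustrated with a minimal sketch: if the error change produced by a small noise vector is approximately the inner product of that noise with the error gradient, then collecting several noise trials and inverting the resulting linear system recovers the gradient. The sketch below uses a batched least-squares solve for the inversion step rather than the paper's iterative linear network, and the toy `error` function and all variable names are illustrative assumptions, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

W = np.array([0.5, -1.2, 0.3, 2.0])

def error(h):
    # Stand-in "unknown" forward mapping from hidden activities to a scalar
    # error; the procedure only needs evaluations of it, never its formula.
    return float(np.tanh(h) @ W)

h0 = rng.normal(size=4)          # current hidden activities

# Inject K small noise vectors at the hidden layer and record the
# resulting changes in the output error.
K, sigma = 32, 1e-4
D = rng.normal(scale=sigma, size=(K, 4))
dE = np.array([error(h0 + d) - error(h0) for d in D])

# To first order, dE ≈ D @ g, where g is the error gradient w.r.t. the
# hidden activities. Invert that relation (here by least squares; the
# paper instead uses an iterative linear network for the inversion).
g_hat, *_ = np.linalg.lstsq(D, dE, rcond=None)

# For this differentiable toy error we can check against the true gradient.
g_true = W * (1.0 - np.tanh(h0) ** 2)
```

Because the noise amplitude is tiny, the linearization is nearly exact and `g_hat` matches `g_true` closely; in the paper's setting the same estimate is obtained without any analytic derivative of the forward path.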
Cite
Text
Le Cun et al. "GEMINI: Gradient Estimation Through Matrix Inversion After Noise Injection." Neural Information Processing Systems, 1988.
Markdown
[Le Cun et al. "GEMINI: Gradient Estimation Through Matrix Inversion After Noise Injection." Neural Information Processing Systems, 1988.](https://mlanthology.org/neurips/1988/cun1988neurips-gemini/)
BibTeX
@inproceedings{cun1988neurips-gemini,
title = {{GEMINI: Gradient Estimation Through Matrix Inversion After Noise Injection}},
author = {Le Cun, Yann and Galland, Conrad C. and Hinton, Geoffrey E.},
booktitle = {Neural Information Processing Systems},
year = {1988},
pages = {141-148},
url = {https://mlanthology.org/neurips/1988/cun1988neurips-gemini/}
}