Learning in Higher-Order "Artificial Dendritic Trees"
Abstract
If neurons sum up their inputs in a non-linear way, as some simulations suggest, how is this distributed fine-grained non-linearity exploited during learning? How are all the small sigmoids in synapse, spine and dendritic tree lined up in the right areas of their respective input spaces? In this report, I show how an abstract atemporal highly nested tree structure with a quadratic transfer function associated with each branchpoint, can self organise using only a single global reinforcement scalar, to perform binary classification tasks. The procedure works well, solving the 6-multiplexer and a difficult phoneme classification task as well as back-propagation does, and faster. Furthermore, it does not calculate an error gradient, but uses a statistical scheme to build moving models of the reinforcement signal.
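The structure described in the abstract can be illustrated with a minimal sketch: a binary tree whose leaves read input components and whose branchpoints each combine their two subtree outputs through a quadratic form. The class and function names, the weight layout of the quadratic, and the balanced tree construction below are all illustrative assumptions, not the paper's exact formulation, and the learning procedure itself (the reinforcement-driven weight updates) is omitted.

```python
import random

random.seed(0)  # fixed seed so the illustrative weights are reproducible


class Leaf:
    """A leaf of the dendritic tree reads one component of the input vector."""

    def __init__(self, index):
        self.index = index

    def output(self, x):
        return x[self.index]


class Branchpoint:
    """A binary branchpoint with a quadratic transfer function.

    Combines the outputs a, b of its two subtrees as
        y = w0 + w1*a + w2*b + w3*a*b + w4*a*a + w5*b*b
    (an assumed general quadratic in two variables; the paper's exact
    parameterisation may differ).
    """

    def __init__(self, left, right):
        self.left, self.right = left, right
        self.w = [random.uniform(-0.1, 0.1) for _ in range(6)]

    def output(self, x):
        a = self.left.output(x)
        b = self.right.output(x)
        w = self.w
        return w[0] + w[1] * a + w[2] * b + w[3] * a * b + w[4] * a * a + w[5] * b * b


def build_tree(indices):
    """Build a balanced binary tree over the given input indices."""
    if len(indices) == 1:
        return Leaf(indices[0])
    mid = len(indices) // 2
    return Branchpoint(build_tree(indices[:mid]), build_tree(indices[mid:]))


# A 6-input tree, matching the input width of the 6-multiplexer task.
tree = build_tree(list(range(6)))
y = tree.output([1, 0, 1, 1, 0, 0])  # forward pass; classify by the sign of y
```

Learning would then adjust each branchpoint's six weights using only the single global reinforcement scalar, rather than a back-propagated error gradient.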
Cite
Text
Bell. "Learning in Higher-Order 'Artificial Dendritic Trees'." Neural Information Processing Systems, 1989.

Markdown

[Bell. "Learning in Higher-Order 'Artificial Dendritic Trees'." Neural Information Processing Systems, 1989.](https://mlanthology.org/neurips/1989/bell1989neurips-learning/)

BibTeX
@inproceedings{bell1989neurips-learning,
title = {{Learning in Higher-Order "Artificial Dendritic Trees"}},
author = {Bell, Tony},
booktitle = {Neural Information Processing Systems},
year = {1989},
pages = {490-497},
url = {https://mlanthology.org/neurips/1989/bell1989neurips-learning/}
}