What Do Hebbian Learners Learn? Reduction Axioms for Iterated Hebbian Learning
Abstract
This paper is a contribution to neural network semantics, a foundational framework for neuro-symbolic AI. The key insight of this theory is that logical operators can be mapped to operators on neural network states. In this paper, we do this for a neural network learning operator. We map a dynamic operator [φ] to iterated Hebbian learning, a simple learning policy that updates a neural network by repeatedly applying Hebb's learning rule until the net reaches a fixed point. Our main result is that we can "translate away" [φ]-formulas via reduction axioms. This means that completeness for the logic of iterated Hebbian learning follows from completeness of the base logic. These reduction axioms also provide (1) a human-interpretable description of iterated Hebbian learning as a kind of plausibility upgrade, and (2) an approach to building neural networks with guarantees on what they can learn.
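To make the learning policy concrete, here is a minimal sketch of iterated Hebbian learning in Python: Hebb's rule strengthens the connection between units that are active together, and the update is repeated until the weights stop changing. This is only an illustration, not the paper's formal model; the function names (`hebb_step`, `iterated_hebbian_update`), the learning rate `eta`, and the weight cap `cap` are assumptions introduced here (in the paper, convergence to a fixed point follows from the formal semantics rather than from clipping).

```python
import numpy as np

def hebb_step(W, x, eta=0.1):
    """One application of Hebb's rule: strengthen the weight between
    any pair of units that fire together (binary activations assumed)."""
    return W + eta * np.outer(x, x)

def iterated_hebbian_update(W, x, eta=0.1, cap=5.0, max_iters=1000):
    """Repeatedly apply Hebb's rule until the weight matrix reaches a
    fixed point. The weight cap is an assumption made here so that the
    iteration terminates; it is not part of the paper's construction."""
    for _ in range(max_iters):
        W_next = np.clip(hebb_step(W, x, eta), -cap, cap)
        if np.allclose(W_next, W):
            return W_next  # fixed point reached: further updates change nothing
        W = W_next
    return W

# Example: iterate Hebbian updates on a small random net for one input pattern.
rng = np.random.default_rng(0)
W0 = rng.normal(size=(4, 4))
x = np.array([1.0, 0.0, 1.0, 1.0])
W_star = iterated_hebbian_update(W0, x)
```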
Cite
Text
Kisby et al. "What Do Hebbian Learners Learn? Reduction Axioms for Iterated Hebbian Learning." AAAI Conference on Artificial Intelligence, 2024. doi:10.1609/AAAI.V38I13.29409
Markdown
[Kisby et al. "What Do Hebbian Learners Learn? Reduction Axioms for Iterated Hebbian Learning." AAAI Conference on Artificial Intelligence, 2024.](https://mlanthology.org/aaai/2024/kisby2024aaai-hebbian/) doi:10.1609/AAAI.V38I13.29409
BibTeX
@inproceedings{kisby2024aaai-hebbian,
title = {{What Do Hebbian Learners Learn? Reduction Axioms for Iterated Hebbian Learning}},
author = {Kisby, Caleb Schultz and Blanco, Saúl A. and Moss, Lawrence S.},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2024},
pages = {14894--14901},
doi = {10.1609/AAAI.V38I13.29409},
url = {https://mlanthology.org/aaai/2024/kisby2024aaai-hebbian/}
}