Temporal-Difference Learning Using Distributed Error Signals

Abstract

A computational problem in biological reward-based learning is how credit assignment is performed in the nucleus accumbens (NAc). Much research suggests that NAc dopamine encodes temporal-difference (TD) errors for learning value predictions. However, dopamine is synchronously distributed in regionally homogeneous concentrations, which does not support explicit credit assignment (as used by backpropagation). It is unclear whether distributed errors alone are sufficient for synapses to make coordinated updates to learn complex, nonlinear reward-based learning tasks. We design a new deep Q-learning algorithm, Artificial Dopamine, to computationally demonstrate that synchronously distributed, per-layer TD errors may be sufficient to learn surprisingly complex RL tasks. We empirically evaluate our algorithm on MinAtar, the DeepMind Control Suite, and classic control tasks, and show it often achieves performance comparable to deep RL algorithms that use backpropagation.
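To make the idea of learning from distributed, per-layer TD errors concrete, the sketch below shows one way such an update could look. This is a minimal illustration, not the authors' Artificial Dopamine algorithm: it assumes each layer has its own Q-value head, computes a layer-local TD error, and updates only its own weights, with activities (but no gradients) passed to the next layer. All class names, layer sizes, and hyperparameters are hypothetical.

```python
# Minimal sketch (assumed, not the paper's exact method): per-layer TD-error
# updates with no backpropagation between layers.
import torch
import torch.nn as nn


class LocalQLayer(nn.Module):
    """A hidden layer with its own Q-value head (illustrative)."""

    def __init__(self, in_dim, hidden_dim, n_actions):
        super().__init__()
        self.hidden = nn.Linear(in_dim, hidden_dim)
        self.q_head = nn.Linear(hidden_dim, n_actions)

    def forward(self, x):
        h = torch.relu(self.hidden(x))
        return h, self.q_head(h)


def per_layer_td_update(layers, optimizers, obs, action, reward,
                        next_obs, done, gamma=0.99):
    """Update each layer from its own TD error; gradients never cross layers."""
    x, x_next = obs, next_obs
    for layer, opt in zip(layers, optimizers):
        h, q = layer(x)
        with torch.no_grad():  # bootstrapped target, no gradient
            h_next, q_next = layer(x_next)
            target = reward + gamma * (1 - done) * q_next.max(dim=-1).values
        td_error = target - q.gather(-1, action.unsqueeze(-1)).squeeze(-1)
        loss = 0.5 * td_error.pow(2).mean()  # layer-local TD loss
        opt.zero_grad()
        loss.backward()  # confined to this layer: inputs are detached
        opt.step()
        # Pass activity forward only; detach blocks cross-layer gradients.
        x, x_next = h.detach(), h_next.detach()
```

The key design point mirrored here is that every layer receives the same kind of scalar TD error signal and updates locally, rather than relying on errors propagated backward through the network.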

Cite

Text

Guan et al. "Temporal-Difference Learning Using Distributed Error Signals." Neural Information Processing Systems, 2024. doi:10.52202/079017-3452

Markdown

[Guan et al. "Temporal-Difference Learning Using Distributed Error Signals." Neural Information Processing Systems, 2024.](https://mlanthology.org/neurips/2024/guan2024neurips-temporaldifference/) doi:10.52202/079017-3452

BibTeX

@inproceedings{guan2024neurips-temporaldifference,
  title     = {{Temporal-Difference Learning Using Distributed Error Signals}},
  author    = {Guan, Jonas and Verch, Shon Eduard and Voelcker, Claas and Jackson, Ethan C. and Papernot, Nicolas and Cunningham, William A.},
  booktitle = {Neural Information Processing Systems},
  year      = {2024},
  doi       = {10.52202/079017-3452},
  url       = {https://mlanthology.org/neurips/2024/guan2024neurips-temporaldifference/}
}