Reward Based Hebbian Learning in Direct Feedback Alignment (Student Abstract)

Abstract

Imparting biological realism to the learning process is gaining attention as a way to produce computationally efficient algorithms without compromising performance. Feedback alignment and the mirror-neuron concept are two such approaches: the feedback weights remain static in the former and are updated via Hebbian learning in the latter. Although these approaches have proven effective for supervised learning, it remained unknown whether they could be applied to reinforcement learning. This study therefore introduces RHebb-DFA, in which reward-based Hebbian learning is used to update the feedback weights in direct feedback alignment. The approach is validated on various Atari games and achieves performance equivalent to DDQN.
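To make the idea concrete, the following is a minimal NumPy sketch of direct feedback alignment with a reward-modulated Hebbian update of the feedback matrix. The abstract does not give the exact RHebb-DFA update rule, so the specific rule here (`B1 += lr_b * reward * outer(h, e)`) and all dimensions and learning rates are illustrative assumptions, not the authors' formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (illustrative only)
n_in, n_hid, n_out = 4, 8, 2

W1 = rng.normal(0, 0.5, (n_hid, n_in))   # forward weights, layer 1
W2 = rng.normal(0, 0.5, (n_out, n_hid))  # forward weights, layer 2
B1 = rng.normal(0, 0.5, (n_hid, n_out))  # direct feedback weights

def relu(x):
    return np.maximum(x, 0.0)

def step(x, target, reward, lr=1e-2, lr_b=1e-3):
    """One training step: DFA for the forward weights, plus a
    reward-modulated Hebbian update of the feedback matrix B1
    (a guessed rule for illustration, not the paper's exact one)."""
    global W1, W2, B1
    h = relu(W1 @ x)                  # hidden activity
    y = W2 @ h                        # network output
    e = y - target                    # output error
    # DFA: project the output error directly to the hidden layer
    # through B1 instead of backpropagating through W2.
    delta_h = (B1 @ e) * (h > 0)
    W2 -= lr * np.outer(e, h)
    W1 -= lr * np.outer(delta_h, x)
    # Reward-modulated Hebbian update: correlate pre-synaptic (hidden)
    # and post-synaptic (error) signals, gated by the scalar reward.
    B1 += lr_b * reward * np.outer(h, e)
    return float(np.mean(e ** 2))
```

In plain DFA the matrix `B1` would stay fixed after initialization; the only change sketched here is that it drifts with a reward-gated Hebbian term, which is the element the abstract identifies as new.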

Cite

Text

Akella et al. "Reward Based Hebbian Learning in Direct Feedback Alignment (Student Abstract)." AAAI Conference on Artificial Intelligence, 2021. doi:10.1609/AAAI.V35I18.17871

Markdown

[Akella et al. "Reward Based Hebbian Learning in Direct Feedback Alignment (Student Abstract)." AAAI Conference on Artificial Intelligence, 2021.](https://mlanthology.org/aaai/2021/akella2021aaai-reward/) doi:10.1609/AAAI.V35I18.17871

BibTeX

@inproceedings{akella2021aaai-reward,
  title     = {{Reward Based Hebbian Learning in Direct Feedback Alignment (Student Abstract)}},
  author    = {Akella, Ashlesha and Singanamalla, Sai Kalyan Ranga and Lin, Chin-Teng},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2021},
  pages     = {15749--15750},
  doi       = {10.1609/AAAI.V35I18.17871},
  url       = {https://mlanthology.org/aaai/2021/akella2021aaai-reward/}
}