Handling Delay in Reinforcement Learning Caused by Parallel Computations of Neurons
Abstract
Biological neural networks operate in parallel, a feature that sets them apart from artificial neural networks and can significantly enhance inference speed. However, this parallelism introduces challenges: when each neuron operates asynchronously with a fixed execution time, an $N$-layer feed-forward neural network without skip connections experiences a delay of $N$ time-steps. While reducing the number of layers can decrease this delay, it also diminishes the network's expressivity. In this work, we investigate the balance between delay and expressivity in neural networks. In particular, we study different types of skip connections, such as residual connections, projections from every hidden representation to the action space, and projections from the observation to every hidden representation. We evaluate different architectures and show that those with skip connections exhibit strong performance across different neuron execution times, common reinforcement learning algorithms, and various environments, including four MuJoCo environments and all MinAtar games. Additionally, we demonstrate that parallel execution of neurons can accelerate inference on standard modern hardware by 6-350%.
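To make the skip-connection idea concrete, below is a minimal PyTorch-style sketch of one of the variants mentioned in the abstract: a feed-forward policy in which every hidden representation also projects directly to the action space, so shallower paths can contribute to the action with fewer time-steps of delay. The class name, layer sizes, and the choice to sum the per-layer action projections are illustrative assumptions, not the authors' exact architecture.

```python
# Minimal sketch (assumed details, not the paper's exact implementation):
# an MLP policy with a projection from every hidden representation to the
# action space, one of the skip-connection variants studied in the paper.
import torch
import torch.nn as nn


class SkipToActionPolicy(nn.Module):
    def __init__(self, obs_dim: int, action_dim: int,
                 hidden_dim: int = 256, num_layers: int = 3):
        super().__init__()
        self.layers = nn.ModuleList()
        in_dim = obs_dim
        for _ in range(num_layers):
            self.layers.append(nn.Linear(in_dim, hidden_dim))
            in_dim = hidden_dim
        # One action-space projection per hidden representation.
        self.action_heads = nn.ModuleList(
            [nn.Linear(hidden_dim, action_dim) for _ in range(num_layers)]
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        h = obs
        action = 0.0
        for layer, head in zip(self.layers, self.action_heads):
            h = torch.relu(layer(h))
            action = action + head(h)  # sum per-layer action projections
        return action


# Usage with hypothetical dimensions (e.g., a MuJoCo-like task):
policy = SkipToActionPolicy(obs_dim=17, action_dim=6)
print(policy(torch.randn(1, 17)).shape)  # torch.Size([1, 6])
```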
Cite
Text
Anokhin et al. "Handling Delay in Reinforcement Learning Caused by Parallel Computations of Neurons." ICML 2024 Workshops: ARLET, 2024.
Markdown
[Anokhin et al. "Handling Delay in Reinforcement Learning Caused by Parallel Computations of Neurons." ICML 2024 Workshops: ARLET, 2024.](https://mlanthology.org/icmlw/2024/anokhin2024icmlw-handling/)
BibTeX
@inproceedings{anokhin2024icmlw-handling,
  title     = {{Handling Delay in Reinforcement Learning Caused by Parallel Computations of Neurons}},
  author    = {Anokhin, Ivan and Rishav, Rishav and Chung, Stephen and Rish, Irina and Kahou, Samira Ebrahimi},
  booktitle = {ICML 2024 Workshops: ARLET},
  year      = {2024},
  url       = {https://mlanthology.org/icmlw/2024/anokhin2024icmlw-handling/}
}