EvoRainbow: Combining Improvements in Evolutionary Reinforcement Learning for Policy Search

Abstract

Evolutionary Algorithms (EAs) and Reinforcement Learning (RL) have both demonstrated powerful capabilities in policy search, albeit based on different principles. A promising direction is to combine the respective strengths of both for efficient policy optimization. To this end, many works have proposed various mechanisms to integrate EAs and RL. However, it is still unclear which of these mechanisms are complementary and can be fully combined. In this paper, we revisit different mechanisms from five perspectives: 1) Interaction Mode, 2) Individual Architecture, 3) EAs and operators, 4) Impact of EA on RL, and 5) Fitness Surrogate and Usage. We evaluate the effectiveness of each mechanism and experimentally analyze why the more effective ones work. Using the most effective mechanisms, we develop EvoRainbow and EvoRainbow-Exp, which outperform strong baselines and provide state-of-the-art performance across various tasks with distinct characteristics. To promote community development, we release the code at https://github.com/yeshenpy/EvoRainbow.
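For context, the hybrid scheme the abstract refers to is typically built on the classic ERL-style loop: a population of policies is evaluated by environment rollouts, a gradient-based RL agent learns from the shared experience, and the RL actor is periodically injected back into the population. The following is a minimal, self-contained sketch of that generic loop under toy assumptions (a synthetic fitness function and an exact-gradient stand-in for the RL update); names such as `inject_every` are illustrative, and this is not EvoRainbow's actual implementation (see the linked repository for that).

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 8                       # size of the (toy) policy parameter vector
TARGET = rng.normal(size=DIM)

def fitness(theta):
    """Toy stand-in for an episode return: higher is better."""
    return -np.sum((theta - TARGET) ** 2)

def rl_step(theta, lr=0.1):
    """Toy stand-in for a gradient-based RL update (exact gradient ascent
    on the toy fitness; a real agent would run e.g. TD3/SAC updates on
    transitions replayed from the population's rollouts)."""
    grad = -2.0 * (theta - TARGET)
    return theta + lr * grad

pop_size, n_elites, mut_std, inject_every = 10, 3, 0.3, 5
population = [rng.normal(size=DIM) for _ in range(pop_size)]
rl_actor = rng.normal(size=DIM)

for gen in range(1, 101):
    # 1) Evaluate the population; in ERL these rollouts also fill the
    #    RL agent's replay buffer (omitted in this toy version).
    scored = sorted(population, key=fitness, reverse=True)

    # 2) RL side: gradient update on the shared experience.
    rl_actor = rl_step(rl_actor)

    # 3) EA side: keep elites, refill with mutated copies of elites.
    population = scored[:n_elites]
    while len(population) < pop_size:
        parent = population[rng.integers(n_elites)]
        population.append(parent + rng.normal(scale=mut_std, size=DIM))

    # 4) Periodically inject the RL actor into the population so that
    #    gradient information can spread through the evolutionary search.
    if gen % inject_every == 0:
        population[-1] = rl_actor.copy()

print("best fitness:", fitness(max(population, key=fitness)))
```

The mechanisms the paper revisits (interaction mode, individual architecture, and so on) are, roughly, design choices within and around this loop, e.g., how the EA and RL sides exchange experience and policies.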

Cite

Text

Li et al. "EvoRainbow: Combining Improvements in Evolutionary Reinforcement Learning for Policy Search." International Conference on Machine Learning, 2024.

Markdown

[Li et al. "EvoRainbow: Combining Improvements in Evolutionary Reinforcement Learning for Policy Search." International Conference on Machine Learning, 2024.](https://mlanthology.org/icml/2024/li2024icml-evorainbow/)

BibTeX

@inproceedings{li2024icml-evorainbow,
  title     = {{EvoRainbow: Combining Improvements in Evolutionary Reinforcement Learning for Policy Search}},
  author    = {Li, Pengyi and Zheng, Yan and Tang, Hongyao and Fu, Xian and Hao, Jianye},
  booktitle = {International Conference on Machine Learning},
  year      = {2024},
  pages     = {29427--29447},
  volume    = {235},
  url       = {https://mlanthology.org/icml/2024/li2024icml-evorainbow/}
}