Efficient Non-Linear Control Through Neuroevolution

Abstract

Many complex control problems are not amenable to traditional controller design. Not only is it difficult to model real systems, but often it is unclear what kind of behavior is required. Reinforcement learning (RL) has made progress through direct interaction with the task environment, but it has been difficult to scale it up to large and partially observable state spaces. In recent years, neuroevolution, the artificial evolution of neural networks, has shown promise in tasks with these two properties. This paper introduces a novel neuroevolution method called CoSyNE that evolves networks at the level of individual synaptic weights. In the most extensive comparison of RL methods to date, it was tested in difficult versions of the pole-balancing problem that involve large state spaces and hidden state. CoSyNE was found to be significantly more efficient and powerful than the other methods on these tasks, forming a promising foundation for solving challenging real-world control tasks.
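The abstract only names CoSyNE; as a rough illustration of the underlying idea (cooperative coevolution at the level of individual weights, where each weight position has its own subpopulation and complete networks are assembled row-wise, evaluated, and the columns reshuffled), here is a minimal Python sketch. The population size, mutation scheme, permutation step, and the toy fitness function (a stand-in for the paper's pole-balancing task) are all illustrative assumptions, not the authors' implementation.

```python
import random

def cosyne(fitness, n_weights, pop_size=20, generations=100,
           top_frac=0.25, mut_std=0.3, seed=0):
    """CoSyNE-style search (illustrative sketch, not the authors' code).

    Each column of `pop` is a subpopulation of candidate values for one
    network weight; each row is a complete weight vector (a network).
    """
    rng = random.Random(seed)
    pop = [[rng.gauss(0, 1) for _ in range(n_weights)]
           for _ in range(pop_size)]
    best_w, best_f = None, float("-inf")
    for _ in range(generations):
        # Evaluate each row as a complete network; sort best-first.
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) > best_f:
            best_f, best_w = fitness(pop[0]), list(pop[0])
        # Replace bottom rows with mutated crossovers of top rows.
        n_top = max(2, int(top_frac * pop_size))
        for i in range(n_top, pop_size):
            a, b = rng.sample(pop[:n_top], 2)
            pop[i] = [rng.choice(pair) + rng.gauss(0, mut_std)
                      for pair in zip(a, b)]
        # Permute each weight subpopulation (column) so surviving
        # weights are recombined into new networks next generation.
        for j in range(n_weights):
            col = [row[j] for row in pop]
            rng.shuffle(col)
            for row, v in zip(pop, col):
                row[j] = v
    return best_w

# Toy separable fitness as a stand-in for the control task:
# find a weight vector close to a fixed target.
target = [0.5, -1.2, 2.0]
fit = lambda w: -sum((wi - ti) ** 2 for wi, ti in zip(w, target))
best = cosyne(fit, n_weights=3)
```

In the actual method, the rows would encode recurrent-network weights evaluated on the pole-balancing simulator; the column-wise permutation is what makes the search cooperative, since each weight's subpopulation is credited through the complete networks it participates in.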

Cite

Text

Gomez et al. "Efficient Non-Linear Control Through Neuroevolution." European Conference on Machine Learning, 2006. doi:10.1007/11871842_64

Markdown

[Gomez et al. "Efficient Non-Linear Control Through Neuroevolution." European Conference on Machine Learning, 2006.](https://mlanthology.org/ecmlpkdd/2006/gomez2006ecml-efficient/) doi:10.1007/11871842_64

BibTeX

@inproceedings{gomez2006ecml-efficient,
  title     = {{Efficient Non-Linear Control Through Neuroevolution}},
  author    = {Gomez, Faustino J. and Schmidhuber, Jürgen and Miikkulainen, Risto},
  booktitle = {European Conference on Machine Learning},
  year      = {2006},
  pages     = {654--662},
  doi       = {10.1007/11871842_64},
  url       = {https://mlanthology.org/ecmlpkdd/2006/gomez2006ecml-efficient/}
}