The Evolution of Genetic Algorithms: Towards Massive Parallelism

Abstract

One of the issues in creating any search technique is balancing the need for diverse exploration with the desire for efficient focusing. This paper explores a genetic algorithm (GA) architecture which is more resilient to local optima than other recently introduced GA models, and which provides the ability to focus search quickly. The GA uses a fine-grain parallel architecture to simulate evolution more closely than previous models. In order to motivate the need for fine-grain parallelism, this paper will provide an overview of the two preceding phases of development: the traditional genetic algorithm, and the coarse-grain parallel GA. A test set of 15 problems is used to compare the effectiveness of a fine-grain parallel GA with that of a coarse-grain parallel GA.
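The fine-grain model the abstract refers to places each individual at a fixed position in a spatial population, with selection and mating restricted to small local neighborhoods. As a rough illustration only (the paper does not provide this code; all names, parameters, and the OneMax fitness function below are illustrative assumptions), a serial simulation of a fine-grain GA on a ring might look like:

```python
import random

# Illustrative sketch of a fine-grain ("cellular") GA on a ring,
# maximizing OneMax (the count of 1-bits in a bitstring).
# All parameters are assumptions for demonstration, not from the paper.
GENOME_LEN = 32
POP_SIZE = 50          # individuals arranged on a ring
NEIGHBORHOOD = 2       # mates chosen within +/- this many positions
GENERATIONS = 60
MUTATION_RATE = 0.02

def fitness(genome):
    return sum(genome)

def random_genome():
    return [random.randint(0, 1) for _ in range(GENOME_LEN)]

def crossover(a, b):
    # Single-point crossover.
    point = random.randrange(1, GENOME_LEN)
    return a[:point] + b[point:]

def mutate(genome):
    return [1 - g if random.random() < MUTATION_RATE else g
            for g in genome]

def step(pop):
    new_pop = []
    for i, individual in enumerate(pop):
        # Key property of the fine-grain model: no global selection.
        # Each cell competes and mates only within its neighborhood.
        offsets = [d for d in range(-NEIGHBORHOOD, NEIGHBORHOOD + 1)
                   if d != 0]
        neighbors = [pop[(i + d) % POP_SIZE] for d in offsets]
        mate = max(neighbors, key=fitness)  # local tournament
        child = mutate(crossover(individual, mate))
        # Replace the resident only if the child is at least as fit.
        new_pop.append(child if fitness(child) >= fitness(individual)
                       else individual)
    return new_pop

random.seed(0)
pop = [random_genome() for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    pop = step(pop)
best = max(pop, key=fitness)
print(fitness(best))
```

Because selection pressure spreads only through overlapping neighborhoods, good building blocks diffuse gradually rather than taking over the whole population at once, which is one intuition for the resilience to local optima claimed above.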

Cite

Text

Baluja. "The Evolution of Genetic Algorithms: Towards Massive Parallelism." International Conference on Machine Learning, 1993. doi:10.1016/B978-1-55860-307-3.50007-1

Markdown

[Baluja. "The Evolution of Genetic Algorithms: Towards Massive Parallelism." International Conference on Machine Learning, 1993.](https://mlanthology.org/icml/1993/baluja1993icml-evolution/) doi:10.1016/B978-1-55860-307-3.50007-1

BibTeX

@inproceedings{baluja1993icml-evolution,
  title     = {{The Evolution of Genetic Algorithms: Towards Massive Parallelism}},
  author    = {Baluja, Shumeet},
  booktitle = {International Conference on Machine Learning},
  year      = {1993},
  pages     = {1--8},
  doi       = {10.1016/B978-1-55860-307-3.50007-1},
  url       = {https://mlanthology.org/icml/1993/baluja1993icml-evolution/}
}