Evolving GPU Machine Code

Abstract

Parallel Graphics Processing Unit (GPU) implementations of Genetic Programming (GP) have appeared in the literature using three main methodologies: (i) compilation, which generates the individuals in GPU code and requires compilation; (ii) pseudo-assembly, which generates the individuals in an intermediary assembly code and also requires compilation; and (iii) interpretation, which interprets the individuals' code at evaluation time. This paper proposes a new methodology that uses concepts from quantum computing and directly manipulates GPU machine code instructions. Our methodology uses a probabilistic representation of an individual to improve the global search capability. In addition, evolving machine code directly eliminates both the overhead of compiling the code and the cost of parsing the program during evaluation. We obtained up to 2.74 trillion GP operations per second for the 20-bit Boolean Multiplexer benchmark. We also compared our approach with the other three GPU-based acceleration methodologies implemented for quantum-inspired linear GP, obtaining significant gains in performance.
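The probabilistic representation mentioned in the abstract can be illustrated with a minimal sketch. Here, each program slot holds a probability distribution over a toy instruction set; "observing" the individual collapses it into one concrete program, and the distribution is then nudged toward the observed instructions. The instruction names, update rule, and function names below are illustrative assumptions for exposition, not the paper's actual GPU machine-code encoding.

```python
import random

# Toy instruction set standing in for GPU machine-code opcodes (assumption).
INSTRUCTIONS = ["AND", "OR", "XOR", "NOT", "MOV"]

def uniform_individual(length):
    """A probabilistic individual: one distribution per program slot.

    Initially uniform, so every instruction is equally likely everywhere.
    """
    p = 1.0 / len(INSTRUCTIONS)
    return [[p] * len(INSTRUCTIONS) for _ in range(length)]

def observe(individual, rng):
    """Collapse the probabilistic individual into one concrete program."""
    program = []
    for probs in individual:
        r = rng.random()
        acc = 0.0
        for instr, p in zip(INSTRUCTIONS, probs):
            acc += p
            if r < acc:
                program.append(instr)
                break
        else:  # guard against floating-point round-off
            program.append(INSTRUCTIONS[-1])
    return program

def reinforce(individual, program, step=0.1):
    """Shift each slot's distribution toward the observed instruction
    (a stand-in for the update applied to good individuals)."""
    for probs, instr in zip(individual, program):
        winner = INSTRUCTIONS.index(instr)
        for j in range(len(probs)):
            probs[j] *= (1.0 - step)
        probs[winner] += step
    return individual

rng = random.Random(42)
ind = uniform_individual(8)       # 8-slot probabilistic program
prog = observe(ind, rng)          # sample one concrete program
ind = reinforce(ind, prog)        # bias future observations toward it
```

In the paper's setting the observed instructions are actual GPU machine-code words written into an executable buffer, which is what removes the compilation and parsing overhead of the other methodologies.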

Cite

Text

da Silva et al. "Evolving GPU Machine Code." Journal of Machine Learning Research, 16:673-712, 2015.

Markdown

[da Silva et al. "Evolving GPU Machine Code." Journal of Machine Learning Research, 16:673-712, 2015.](https://mlanthology.org/jmlr/2015/dasilva2015jmlr-evolving/)

BibTeX

@article{dasilva2015jmlr-evolving,
  title     = {{Evolving GPU Machine Code}},
  author    = {da Silva, Cleomar Pereira and Dias, Douglas Mota and Bentes, Cristiana and Pacheco, Marco Aurélio Cavalcanti and Cupertino, Leandro Fontoura},
  journal   = {Journal of Machine Learning Research},
  year      = {2015},
  pages     = {673--712},
  volume    = {16},
  url       = {https://mlanthology.org/jmlr/2015/dasilva2015jmlr-evolving/}
}