Optimizing Memory Placement Using Evolutionary Graph Reinforcement Learning

Abstract

For deep neural network accelerators, memory movement is energetically expensive and can bound computation. Therefore, optimal mapping of tensors to memory hierarchies is critical to performance. The growing complexity of neural networks calls for automated memory mapping instead of manual heuristic approaches; yet the search space over neural network computational graphs has previously been prohibitively large. We introduce Evolutionary Graph Reinforcement Learning (EGRL), a method designed for large search spaces that combines graph neural networks, reinforcement learning, and evolutionary search. A set of fast, stateless policies guides the evolutionary search to improve its sample efficiency. We train and validate our approach directly on the Intel NNP-I chip for inference. EGRL outperforms policy-gradient, evolutionary-search, and dynamic-programming baselines on BERT, ResNet-101 and ResNet-50. We additionally achieve 28-78% speed-up compared to the native NNP-I compiler on all three workloads.
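
The core loop the abstract describes, with stateless policies proposing placements that seed an evolutionary search over tensor-to-memory mappings, can be sketched roughly as below. This is a minimal illustrative sketch only, not the authors' implementation: the memory levels, capacities, latency costs, and the random `stateless_policy` are assumptions standing in for the NNP-I hardware-in-the-loop rewards and the graph-neural-network policies used in the paper.

```python
# Conceptual sketch (not the paper's code): a policy-guided evolutionary search
# over tensor-to-memory placements. Fitness, memory levels, and the "policy"
# below are illustrative stand-ins for on-chip measurements and learned GNNs.
import random

MEMORY_LEVELS = ["DRAM", "LLC", "SRAM"]            # hypothetical hierarchy
NUM_TENSORS = 32                                   # toy workload size
LATENCY = {"DRAM": 10.0, "LLC": 3.0, "SRAM": 1.0}  # assumed per-access cost
CAPACITY = {"DRAM": 32, "LLC": 8, "SRAM": 4}       # assumed capacity (tensors)

def fitness(placement):
    """Negative cost: penalize slow memories and capacity overflows."""
    cost = sum(LATENCY[level] for level in placement)
    for level in MEMORY_LEVELS:
        overflow = max(0, placement.count(level) - CAPACITY[level])
        cost += 100.0 * overflow
    return -cost

def stateless_policy(_tensor_idx):
    """Stand-in for a fast, stateless policy proposing a placement level."""
    return random.choices(MEMORY_LEVELS, weights=[1, 2, 3])[0]

def mutate(placement, policy, rate=0.1):
    """With probability `rate`, replace a tensor's level with the policy's proposal."""
    return [policy(i) if random.random() < rate else level
            for i, level in enumerate(placement)]

def evolve(pop_size=20, generations=50):
    population = [[random.choice(MEMORY_LEVELS) for _ in range(NUM_TENSORS)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        elites = population[: pop_size // 4]
        # Refill the population by mutating elites with policy-guided proposals.
        population = elites + [mutate(random.choice(elites), stateless_policy)
                               for _ in range(pop_size - len(elites))]
    return max(population, key=fitness)

if __name__ == "__main__":
    best = evolve()
    print("best fitness:", fitness(best))
```

In EGRL itself the proposal step comes from trained graph-network policies and the fitness signal comes from running the mapped workload on the accelerator, but the elite-selection-and-mutation structure is the same.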

Cite

Text

Khadka et al. "Optimizing Memory Placement Using Evolutionary Graph Reinforcement Learning." International Conference on Learning Representations, 2021.

Markdown

[Khadka et al. "Optimizing Memory Placement Using Evolutionary Graph Reinforcement Learning." International Conference on Learning Representations, 2021.](https://mlanthology.org/iclr/2021/khadka2021iclr-optimizing/)

BibTeX

@inproceedings{khadka2021iclr-optimizing,
  title     = {{Optimizing Memory Placement Using Evolutionary Graph Reinforcement Learning}},
  author    = {Khadka, Shauharda and Aflalo, Estelle and Marder, Mattias and Ben-David, Avrech and Miret, Santiago and Mannor, Shie and Hazan, Tamir and Tang, Hanlin and Majumdar, Somdeb},
  booktitle = {International Conference on Learning Representations},
  year      = {2021},
  url       = {https://mlanthology.org/iclr/2021/khadka2021iclr-optimizing/}
}