Llama-NAS: Efficient Neural Architecture Search for Large Language Models

Abstract

Modern large language models (LLMs) have shown extraordinary abilities in natural language processing, complex reasoning, sentiment analysis and other tasks, prompting their extensive adoption. Unfortunately, these abilities come with very high memory and computational costs, which preclude the use of LLMs on most hardware platforms. To mitigate this, we propose an effective method of finding Pareto-optimal network architectures using one-shot NAS. In particular, we fine-tune LLaMA2-7B only once and then apply genetic algorithm-based search to find smaller, less computationally complex network architectures. For certain tasks, we demonstrate a 1.5x reduction in model size and a 1.3x speedup in throughput with a negligible drop in accuracy. In addition to finding smaller, higher-performing network architectures, our method does so more effectively and efficiently than certain pruning or sparsification techniques.
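The search procedure outlined in the abstract (train a supernet once, then run a genetic search for Pareto-optimal sub-networks) can be sketched as a simple evolutionary loop. This is a minimal illustration only: the sub-network encoding (per-layer keep/skip flags and width multipliers), the toy objectives, and the placeholder `accuracy` function are assumptions for demonstration, not the paper's actual search space or evaluator.

```python
import random

# Hypothetical search space: keep/skip each layer and pick an FFN width
# multiplier per layer. These choices are illustrative, not from the paper.
NUM_LAYERS = 32
WIDTH_CHOICES = [0.5, 0.75, 1.0]

def random_architecture():
    """Sample a random sub-network configuration."""
    return {
        "active_layers": [random.random() < 0.9 for _ in range(NUM_LAYERS)],
        "widths": [random.choice(WIDTH_CHOICES) for _ in range(NUM_LAYERS)],
    }

def model_size(arch):
    """Proxy objective: relative parameter count (smaller is better)."""
    return sum(w for a, w in zip(arch["active_layers"], arch["widths"]) if a)

def accuracy(arch):
    """Placeholder for benchmark evaluation. In the real method this would
    score the fine-tuned supernet with the sub-network applied; here it is
    a toy stand-in where larger sub-networks score higher."""
    return model_size(arch) / NUM_LAYERS

def mutate(arch, rate=0.1):
    """Randomly flip layer activations and resample widths."""
    child = {"active_layers": list(arch["active_layers"]),
             "widths": list(arch["widths"])}
    for i in range(NUM_LAYERS):
        if random.random() < rate:
            child["active_layers"][i] = not child["active_layers"][i]
        if random.random() < rate:
            child["widths"][i] = random.choice(WIDTH_CHOICES)
    return child

def pareto_front(population):
    """Keep architectures not dominated on (size, accuracy)."""
    front = []
    for a in population:
        sa, aa = model_size(a), accuracy(a)
        dominated = any(
            model_size(b) <= sa and accuracy(b) >= aa
            and (model_size(b) < sa or accuracy(b) > aa)
            for b in population if b is not a)
        if not dominated:
            front.append(a)
    return front

def search(generations=20, pop_size=16):
    """Evolve a population, refilling each generation by mutating
    Pareto-optimal parents."""
    population = [random_architecture() for _ in range(pop_size)]
    for _ in range(generations):
        front = pareto_front(population)
        population = front + [mutate(random.choice(front))
                              for _ in range(pop_size - len(front))]
    return pareto_front(population)
```

Because no sub-network is ever trained from scratch, the cost of the loop is dominated by the `accuracy` evaluations against the single fine-tuned supernet, which is what makes the one-shot setup tractable.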

Cite

Text

Sarah et al. "Llama-NAS: Efficient Neural Architecture Search for Large Language Models." European Conference on Computer Vision Workshops, 2024. doi:10.1007/978-3-031-91979-4_7

Markdown

[Sarah et al. "Llama-NAS: Efficient Neural Architecture Search for Large Language Models." European Conference on Computer Vision Workshops, 2024.](https://mlanthology.org/eccvw/2024/sarah2024eccvw-llamanas/) doi:10.1007/978-3-031-91979-4_7

BibTeX

@inproceedings{sarah2024eccvw-llamanas,
  title     = {{Llama-NAS: Efficient Neural Architecture Search for Large Language Models}},
  author    = {Sarah, Anthony and Sridhar, Sharath Nittur and Szankin, Maciej and Sundaresan, Sairam},
  booktitle = {European Conference on Computer Vision Workshops},
  year      = {2024},
  pages     = {67--74},
  doi       = {10.1007/978-3-031-91979-4_7},
  url       = {https://mlanthology.org/eccvw/2024/sarah2024eccvw-llamanas/}
}