LLMCarbon: Modeling the End-to-End Carbon Footprint of Large Language Models
Abstract
The carbon footprint associated with large language models (LLMs) is a significant concern, encompassing emissions from their training, inference, experimentation, and storage processes, including both operational and embodied carbon emissions. An essential capability is accurately estimating the carbon impact of emerging LLMs before they are trained, a process that relies heavily on GPU usage. Existing studies have reported the carbon footprint of LLM training, but only one tool, mlco2, can predict the carbon footprint of new neural networks prior to physical training. However, mlco2 has several serious limitations: it cannot extend its estimation to dense or mixture-of-experts (MoE) LLMs, disregards critical architectural parameters, focuses solely on GPUs, and cannot model embodied carbon footprints. Addressing these gaps, we introduce LLMCarbon, an end-to-end carbon footprint projection model designed for both dense and MoE LLMs. Compared to mlco2, LLMCarbon significantly enhances the accuracy of carbon footprint estimations for various LLMs. The source code is released at https://github.com/SotaroKaneda/MLCarbon.
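For intuition on the two emission sources named in the abstract, the sketch below combines operational carbon (device energy scaled by data-center PUE and grid carbon intensity) with embodied carbon (manufacturing emissions amortized over hardware lifetime). This is a minimal illustrative sketch under generic assumptions, not LLMCarbon's actual model; every function name, parameter, and default value here is a hypothetical placeholder.

```python
# Illustrative sketch only (not the paper's model): projecting operational and
# embodied carbon for a training run. All names and defaults are assumptions.

def operational_carbon_kg(device_count: int,
                          avg_device_power_w: float,
                          training_hours: float,
                          pue: float = 1.1,
                          grid_intensity_kg_per_kwh: float = 0.4) -> float:
    """Operational carbon = device energy (kWh) * data-center PUE * grid intensity."""
    energy_kwh = device_count * avg_device_power_w * training_hours / 1000.0
    return energy_kwh * pue * grid_intensity_kg_per_kwh


def embodied_carbon_kg(device_count: int,
                       training_hours: float,
                       per_device_embodied_kg: float = 150.0,
                       device_lifetime_hours: float = 5 * 365 * 24) -> float:
    """Embodied carbon amortized over hardware lifetime, attributed to the run
    in proportion to the fraction of that lifetime it consumes."""
    return device_count * per_device_embodied_kg * (training_hours / device_lifetime_hours)


if __name__ == "__main__":
    # Hypothetical run: 1,000 accelerators at 300 W average draw for 30 days.
    devices, hours = 1000, 30 * 24
    op = operational_carbon_kg(devices, 300.0, hours)
    emb = embodied_carbon_kg(devices, hours)
    print(f"operational ~{op / 1000:.1f} tCO2e, embodied ~{emb / 1000:.1f} tCO2e")
```

The point of separating the two terms is that embodied emissions scale with how much hardware lifetime a run consumes, while operational emissions scale with energy drawn, PUE, and the carbon intensity of the local grid.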
Cite
Text
Faiz et al. "LLMCarbon: Modeling the End-to-End Carbon Footprint of Large Language Models." International Conference on Learning Representations, 2024.

Markdown
[Faiz et al. "LLMCarbon: Modeling the End-to-End Carbon Footprint of Large Language Models." International Conference on Learning Representations, 2024.](https://mlanthology.org/iclr/2024/faiz2024iclr-llmcarbon/)

BibTeX
@inproceedings{faiz2024iclr-llmcarbon,
title = {{LLMCarbon: Modeling the End-to-End Carbon Footprint of Large Language Models}},
author = {Faiz, Ahmad and Kaneda, Sotaro and Wang, Ruhan and Osi, Rita Chukwunyere and Sharma, Prateek and Chen, Fan and Jiang, Lei},
booktitle = {International Conference on Learning Representations},
year = {2024},
url = {https://mlanthology.org/iclr/2024/faiz2024iclr-llmcarbon/}
}