Optimizing Large Language Model Training Using FP4 Quantization
Abstract
The growing computational demands of training large language models (LLMs) necessitate more efficient methods. Quantized training presents a promising solution by enabling low-bit arithmetic operations to reduce these costs. While FP8 precision has demonstrated feasibility, leveraging FP4 remains a challenge due to significant quantization errors and limited representational capacity. This work introduces the first FP4 training framework for LLMs, addressing these challenges with two key innovations: a differentiable quantization estimator for precise weight updates and an outlier clamping and compensation strategy to prevent activation collapse. To ensure stability, the framework integrates a mixed-precision training scheme and vector-wise quantization. Experimental results demonstrate that our FP4 framework achieves accuracy comparable to BF16 and FP8, with minimal degradation, scaling effectively to 13B-parameter LLMs trained on up to 100B tokens. With the emergence of next-generation hardware supporting FP4, our framework sets a foundation for efficient ultra-low precision training.
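As an illustrative sketch only, the snippet below shows how vector-wise FP4 fake quantization with outlier clamping might look in PyTorch; a plain straight-through estimator stands in for the paper's differentiable quantization estimator, and the E2M1 value grid, function names, and the 0.999 clamping quantile are assumptions for illustration, not the authors' implementation.

import torch

# Non-negative magnitudes representable in the E2M1 FP4 format.
FP4_GRID = torch.tensor([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])

class FP4Quantize(torch.autograd.Function):
    """Fake-quantize to an FP4 grid on the forward pass; pass gradients straight through."""

    @staticmethod
    def forward(ctx, x, clamp_quantile=0.999):
        # Vector-wise (per-row) outlier clamping so a few extreme values
        # do not dominate the quantization scale.
        thresh = torch.quantile(x.abs(), clamp_quantile, dim=-1, keepdim=True)
        x_clamped = torch.maximum(torch.minimum(x, thresh), -thresh)

        # Per-row scale mapping each vector onto the FP4 dynamic range [-6, 6].
        scale = thresh / FP4_GRID[-1].item()
        scale = torch.where(scale == 0, torch.ones_like(scale), scale)
        x_scaled = x_clamped / scale

        # Round every element to the nearest representable FP4 magnitude.
        grid = FP4_GRID.to(x.device, x.dtype)
        idx = torch.argmin((x_scaled.abs().unsqueeze(-1) - grid).abs(), dim=-1)
        x_q = torch.sign(x_scaled) * grid[idx]
        return x_q * scale

    @staticmethod
    def backward(ctx, grad_output):
        # Straight-through estimator: treat the quantizer as identity for gradients.
        return grad_output, None

if __name__ == "__main__":
    w = torch.randn(4, 16, requires_grad=True)
    w_q = FP4Quantize.apply(w)   # values lie on a per-row scaled FP4 grid
    w_q.sum().backward()         # gradients flow through unchanged
    print(w_q.shape, w.grad.shape)

In a full mixed-precision setup, the quantized weights and activations would feed the low-bit matrix multiplications while master weights, gradients, and optimizer states remain in higher precision, as the abstract describes.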
Cite
Text
Wang et al. "Optimizing Large Language Model Training Using FP4 Quantization." Proceedings of the 42nd International Conference on Machine Learning, 2025.
Markdown
[Wang et al. "Optimizing Large Language Model Training Using FP4 Quantization." Proceedings of the 42nd International Conference on Machine Learning, 2025.](https://mlanthology.org/icml/2025/wang2025icml-optimizing/)
BibTeX
@inproceedings{wang2025icml-optimizing,
title = {{Optimizing Large Language Model Training Using FP4 Quantization}},
author = {Wang, Ruizhe and Gong, Yeyun and Liu, Xiao and Zhao, Guoshuai and Yang, Ziyue and Guo, Baining and Zha, Zheng-Jun and Cheng, Peng},
booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
year = {2025},
pages = {62937--62957},
volume = {267},
url = {https://mlanthology.org/icml/2025/wang2025icml-optimizing/}
}