Exploring the Trade-Offs: Quantization Methods, Task Difficulty, and Model Size in Large Language Models from Edge to Giant
Abstract
Quantization has gained attention as a promising solution for the cost-effective deployment of large and small language models. However, most prior work has been limited to perplexity or basic knowledge tasks and lacks a comprehensive evaluation of recent models like Llama-3.3. In this paper, we conduct a comprehensive evaluation of instruction-tuned models spanning 1B to 405B parameters, applying four quantization methods across 13 datasets. Our findings reveal that (1) quantized models generally surpass smaller FP16 baselines, yet they often struggle with instruction-following and hallucination detection; (2) FP8 consistently emerges as the most robust option across tasks, and AWQ tends to outperform GPTQ in weight-only quantization; (3) smaller models can suffer severe accuracy drops at 4-bit quantization, while 70B-scale models maintain stable performance; (4) notably, *hard* tasks do not always experience the largest accuracy losses, indicating that quantization magnifies a model's inherent weaknesses rather than simply correlating with task difficulty; and (5) an LLM-based judge (MT-Bench) highlights significant performance declines in Coding and STEM tasks, though it occasionally reports improvements in reasoning.
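As a rough illustration of the weight-only 4-bit setting evaluated in the paper, the sketch below quantizes an instruction-tuned Llama checkpoint with the AutoAWQ library. This is a minimal example under assumed settings (model identifier, group size, and output path are illustrative), not the authors' actual evaluation pipeline.

```python
# Minimal AWQ weight-only 4-bit quantization sketch (assumptions: model id,
# group size 128, GEMM kernel; not the paper's exact configuration).
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_path = "meta-llama/Llama-3.1-8B-Instruct"   # assumed checkpoint
quant_path = "llama-3.1-8b-instruct-awq"          # assumed output directory
quant_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}

# Load the FP16 model and its tokenizer.
model = AutoAWQForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path)

# Run activation-aware calibration and quantize the weights to 4 bits.
model.quantize(tokenizer, quant_config=quant_config)

# Save the quantized weights and tokenizer for downstream evaluation.
model.save_quantized(quant_path)
tokenizer.save_pretrained(quant_path)
```

The saved checkpoint can then be evaluated on downstream benchmarks (e.g., with a standard evaluation harness) and compared against the FP16 baseline, mirroring the kind of comparison reported in the paper.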
Cite
Text
Lee et al. "Exploring the Trade-Offs: Quantization Methods, Task Difficulty, and Model Size in Large Language Models from Edge to Giant." International Joint Conference on Artificial Intelligence, 2025. doi:10.24963/IJCAI.2025/902
Markdown
[Lee et al. "Exploring the Trade-Offs: Quantization Methods, Task Difficulty, and Model Size in Large Language Models from Edge to Giant." International Joint Conference on Artificial Intelligence, 2025.](https://mlanthology.org/ijcai/2025/lee2025ijcai-exploring/) doi:10.24963/IJCAI.2025/902
BibTeX
@inproceedings{lee2025ijcai-exploring,
title = {{Exploring the Trade-Offs: Quantization Methods, Task Difficulty, and Model Size in Large Language Models from Edge to Giant}},
author = {Lee, Jemin and Park, Sihyeong and Kwon, Jinse and Oh, Jihun and Kwon, Yongin},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {2025},
pages = {8113--8121},
doi = {10.24963/IJCAI.2025/902},
url = {https://mlanthology.org/ijcai/2025/lee2025ijcai-exploring/}
}