SCALM: Detecting Bad Practices in Smart Contracts Through LLMs
Abstract
As the Ethereum platform continues to mature and gain widespread adoption, it is crucial to maintain high standards in smart contract writing practices. While bad practices in smart contracts may not directly cause security issues, they elevate the risk of encountering problems. To understand and avoid these bad practices, this paper presents the first systematic study of bad practices in smart contracts, covering over 35 specific issues. Specifically, we propose a large language model (LLM)-based framework, SCALM, which combines Step-Back Prompting and Retrieval-Augmented Generation (RAG) to effectively identify and address various bad practices. Extensive experiments with multiple LLMs and datasets show that SCALM outperforms existing tools in detecting bad practices in smart contracts.
Cite
Text
Li et al. "SCALM: Detecting Bad Practices in Smart Contracts Through LLMs." AAAI Conference on Artificial Intelligence, 2025. doi:10.1609/AAAI.V39I1.32026
Markdown
[Li et al. "SCALM: Detecting Bad Practices in Smart Contracts Through LLMs." AAAI Conference on Artificial Intelligence, 2025.](https://mlanthology.org/aaai/2025/li2025aaai-scalm/) doi:10.1609/AAAI.V39I1.32026
BibTeX
@inproceedings{li2025aaai-scalm,
title = {{SCALM: Detecting Bad Practices in Smart Contracts Through LLMs}},
author = {Li, Zongwei and Li, Xiaoqi and Li, Wenkai and Wang, Xin},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2025},
pages = {470--477},
doi = {10.1609/AAAI.V39I1.32026},
url = {https://mlanthology.org/aaai/2025/li2025aaai-scalm/}
}