On Memorization in Diffusion Models
Abstract
Due to their capacity to generate novel and high-quality samples, diffusion models have attracted significant research interest in recent years. Notably, the typical training objective of diffusion models, i.e., denoising score matching, has a closed-form optimal solution that can only generate samples replicating the training data. This indicates that memorization is theoretically expected, which contradicts the generalization ability commonly observed in state-of-the-art diffusion models, and thus calls for a deeper understanding. Looking into this, we first observe that memorization tends to occur on smaller datasets, which motivates our definition of effective model memorization (EMM), a metric measuring the maximum training-data size at which a model approximates its theoretical optimum. We then quantify the impact of influential factors on memorization in terms of EMM, focusing primarily on data distribution, model configuration, and training procedure. Besides comprehensive empirical results identifying these factors, we surprisingly find that conditioning training data on uninformative random labels can significantly trigger memorization in diffusion models. Our study holds practical significance for diffusion model users and offers clues to theoretical research in deep generative models.
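For context, the closed-form optimum referenced in the abstract is the standard result that the denoising-score-matching objective is minimized by the score of the noised empirical (training-set) distribution. A minimal sketch of this result is given below; the notation ($\alpha_t$, $\sigma_t$, training points $x_i$) is assumed for illustration and is not taken verbatim from the paper.

```latex
% Empirical marginal under a Gaussian forward kernel
% q_t(x_t \mid x_0) = \mathcal{N}(x_t;\, \alpha_t x_0,\, \sigma_t^2 I):
q_t(x_t) = \frac{1}{N} \sum_{i=1}^{N} \mathcal{N}\!\left(x_t;\, \alpha_t x_i,\, \sigma_t^2 I\right)

% DSM-optimal score: a softmax-weighted pull toward the noised training points,
% so the reverse process concentrates on \{x_i\} as t \to 0 (pure memorization).
s^{*}(x_t, t) = \nabla_{x_t} \log q_t(x_t)
             = \sum_{i=1}^{N} w_i(x_t)\, \frac{\alpha_t x_i - x_t}{\sigma_t^{2}},
\qquad
w_i(x_t) = \frac{\mathcal{N}(x_t;\, \alpha_t x_i,\, \sigma_t^2 I)}
                {\sum_{j=1}^{N} \mathcal{N}(x_t;\, \alpha_t x_j,\, \sigma_t^2 I)}
```

Sampling the reverse process with this optimal score collapses onto the training points as the noise level vanishes, which is why the theoretical optimum can only reproduce training data.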
Cite
Text
Gu et al. "On Memorization in Diffusion Models." Transactions on Machine Learning Research, 2025.

Markdown
[Gu et al. "On Memorization in Diffusion Models." Transactions on Machine Learning Research, 2025.](https://mlanthology.org/tmlr/2025/gu2025tmlr-memorization/)

BibTeX
@article{gu2025tmlr-memorization,
  title   = {{On Memorization in Diffusion Models}},
  author  = {Gu, Xiangming and Du, Chao and Pang, Tianyu and Li, Chongxuan and Lin, Min and Wang, Ye},
  journal = {Transactions on Machine Learning Research},
  year    = {2025},
  url     = {https://mlanthology.org/tmlr/2025/gu2025tmlr-memorization/}
}