Slight Corruption in Pre-Training Data Makes Better Diffusion Models
Abstract
Diffusion models (DMs) have shown remarkable capabilities in generating realistic, high-quality images, audio, and videos. They benefit significantly from extensive pre-training on large-scale datasets, including web-crawled data with paired conditions, such as image-text and image-class pairs. Despite rigorous filtering, these pre-training datasets inevitably contain corrupted pairs, where conditions do not accurately describe the data. This paper presents the first comprehensive study on the impact of such corruption in the pre-training data of DMs. We synthetically corrupt ImageNet-1K and CC3M to pre-train and evaluate over $50$ conditional DMs. Our empirical findings reveal that various types of slight corruption in pre-training can significantly enhance the quality, diversity, and fidelity of the generated images across different DMs, during both pre-training and downstream adaptation. Theoretically, we consider a Gaussian mixture model and prove that slight corruption in the condition leads to higher entropy and a smaller 2-Wasserstein distance to the ground truth for the data distribution generated by corruptly trained DMs. Inspired by our analysis, we propose a simple method to improve the training of DMs on practical datasets by adding condition embedding perturbations (CEP). CEP significantly improves the performance of various DMs in both pre-training and downstream tasks. We hope that our study provides new insights into understanding the data and pre-training processes of DMs.
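The abstract only names CEP in passing, so the following is a minimal sketch of what adding condition embedding perturbations to a conditional DM training step could look like, assuming the perturbation is i.i.d. Gaussian noise added to the class/text condition embedding. The function names, the `scheduler` interface, and the default scale `gamma` are hypothetical illustrations, not the paper's implementation.

```python
import torch
import torch.nn.functional as F


def perturb_condition_embedding(cond_emb: torch.Tensor, gamma: float = 0.1) -> torch.Tensor:
    """Add i.i.d. Gaussian noise to the condition embedding (a CEP-style perturbation).

    cond_emb: condition embedding, e.g. a class or text embedding of shape (B, D).
    gamma:    perturbation scale (hypothetical default; in practice this would be tuned).
    """
    return cond_emb + gamma * torch.randn_like(cond_emb)


def training_step(model, x0, cond_emb, scheduler, gamma: float = 0.1) -> torch.Tensor:
    """One noise-prediction training step with a perturbed condition (hypothetical scheduler API)."""
    noise = torch.randn_like(x0)
    t = torch.randint(0, scheduler.num_timesteps, (x0.size(0),), device=x0.device)
    x_t = scheduler.add_noise(x0, noise, t)                  # forward diffusion q(x_t | x_0)
    cond_perturbed = perturb_condition_embedding(cond_emb, gamma)
    pred = model(x_t, t, cond_perturbed)                     # denoiser conditioned on the perturbed embedding
    return F.mse_loss(pred, noise)                           # standard epsilon-prediction loss
```

The only change relative to a standard conditional training loop is that the denoiser sees the perturbed embedding instead of the clean one; sampling at inference time is left unchanged.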
Cite
Text
Chen et al. "Slight Corruption in Pre-Training Data Makes Better Diffusion Models." Neural Information Processing Systems, 2024. doi:10.52202/079017-4008
Markdown
[Chen et al. "Slight Corruption in Pre-Training Data Makes Better Diffusion Models." Neural Information Processing Systems, 2024.](https://mlanthology.org/neurips/2024/chen2024neurips-slight/) doi:10.52202/079017-4008
BibTeX
@inproceedings{chen2024neurips-slight,
title = {{Slight Corruption in Pre-Training Data Makes Better Diffusion Models}},
author = {Chen, Hao and Han, Yujin and Misra, Diganta and Li, Xiang and Hu, Kai and Zou, Difan and Sugiyama, Masashi and Wang, Jindong and Raj, Bhiksha},
booktitle = {Neural Information Processing Systems},
year = {2024},
doi = {10.52202/079017-4008},
url = {https://mlanthology.org/neurips/2024/chen2024neurips-slight/}
}