Fine-Tuning Diffusion Models with Limited Data
Abstract
Diffusion models have recently shown remarkable progress, demonstrating state-of-the-art image generation quality. Like other high-fidelity generative models, diffusion models require a large amount of data and compute for stable training, which hinders their application in limited-data settings. To overcome this issue, one can take a diffusion model pre-trained on a large-scale dataset and fine-tune it on a target dataset. Unfortunately, as we show empirically, this easily results in overfitting. In this paper, we propose a fine-tuning algorithm for diffusion models that trains efficiently and robustly in limited-data settings. We first show that fine-tuning only a small subset of the pre-trained parameters can learn the target dataset with much less overfitting. We then introduce a lightweight adapter module that can be attached to the pre-trained model with minimal overhead, and show that fine-tuning with our adapter module significantly improves image generation quality. We demonstrate the effectiveness of our method on various real-world image datasets.
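To make the idea concrete, here is a minimal PyTorch-style sketch (not the authors' code) of the general pattern the abstract describes: freeze a pre-trained network and train only a small residual adapter attached to it. The class names `BottleneckAdapter` and `AdaptedBlock`, the reduction factor, and the zero-initialized up-projection are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch: frozen pre-trained block + trainable lightweight adapter.
# All names and hyperparameters here are hypothetical, for illustration only.

import torch
import torch.nn as nn


class BottleneckAdapter(nn.Module):
    """Residual bottleneck adapter: down-project, nonlinearity, up-project."""

    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        hidden = max(channels // reduction, 1)
        self.down = nn.Conv2d(channels, hidden, kernel_size=1)
        self.act = nn.SiLU()
        self.up = nn.Conv2d(hidden, channels, kernel_size=1)
        # Zero-init the up-projection so the adapter starts as an identity map.
        nn.init.zeros_(self.up.weight)
        nn.init.zeros_(self.up.bias)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.up(self.act(self.down(x)))


class AdaptedBlock(nn.Module):
    """Wraps a frozen pre-trained block and appends a trainable adapter."""

    def __init__(self, block: nn.Module, channels: int):
        super().__init__()
        self.block = block
        self.adapter = BottleneckAdapter(channels)
        for p in self.block.parameters():
            p.requires_grad_(False)  # freeze the pre-trained weights

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.adapter(self.block(x))


if __name__ == "__main__":
    # Stand-in for one block of a pre-trained diffusion UNet.
    pretrained = nn.Conv2d(64, 64, kernel_size=3, padding=1)
    adapted = AdaptedBlock(pretrained, channels=64)
    out = adapted(torch.randn(2, 64, 32, 32))
    trainable = sum(p.numel() for p in adapted.parameters() if p.requires_grad)
    print(out.shape, trainable, "trainable params")
```

Only the adapter's parameters receive gradients, so the number of trainable parameters during fine-tuning is a small fraction of the full model, which is what limits overfitting on small target datasets.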
Cite
Text
Moon et al. "Fine-Tuning Diffusion Models with Limited Data." NeurIPS 2022 Workshops: SBM, 2022.

Markdown

[Moon et al. "Fine-Tuning Diffusion Models with Limited Data." NeurIPS 2022 Workshops: SBM, 2022.](https://mlanthology.org/neuripsw/2022/moon2022neuripsw-finetuning/)

BibTeX
@inproceedings{moon2022neuripsw-finetuning,
title = {{Fine-Tuning Diffusion Models with Limited Data}},
author = {Moon, Taehong and Choi, Moonseok and Lee, Gayoung and Ha, Jung-Woo and Lee, Juho},
booktitle = {NeurIPS 2022 Workshops: SBM},
year = {2022},
url = {https://mlanthology.org/neuripsw/2022/moon2022neuripsw-finetuning/}
}