Generating High Fidelity Data from Low-Density Regions Using Diffusion Models

Abstract

Our work addresses sample deficiency in low-density regions of the data manifold in common image datasets. We leverage diffusion-based generative models to synthesize novel images from low-density regions. We observe that uniform sampling from diffusion models predominantly draws from high-density regions of the data manifold. We therefore modify the sampling process to guide it toward low-density regions while simultaneously maintaining the fidelity of the synthetic data. We rigorously demonstrate that our process successfully generates novel, high-fidelity samples from low-density regions. We further examine the generated samples and show that the model does not memorize low-density data but indeed learns to generate novel samples from low-density regions.
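The core idea of steering a score-based sampler away from high-density regions can be illustrated with a toy sketch. The code below is not the paper's method: it runs Langevin dynamics on a 1-D Gaussian whose score (gradient of the log-density) is known in closed form, and introduces a hypothetical `guidance` parameter that scales down the attraction toward the mode, so samples settle farther into the low-density tails.

```python
import math
import random


def score(x, mu=0.0, sigma=1.0):
    # Analytic score of a 1-D Gaussian: d/dx log N(x; mu, sigma^2).
    return -(x - mu) / sigma**2


def langevin_sample(steps=500, eps=0.01, guidance=0.0, seed=0):
    """Langevin sampling with a toy low-density guidance term.

    guidance=0.0 recovers plain Langevin dynamics, whose stationary
    distribution is the data density. guidance > 0 weakens the drift
    toward high-density regions, biasing samples toward the tails.
    (The `guidance` parameter here is illustrative, not the paper's.)
    """
    rng = random.Random(seed)
    x = rng.gauss(0.0, 1.0)  # initialize near the data distribution
    for _ in range(steps):
        drift = (1.0 - guidance) * score(x)
        # Standard Langevin update: drift step plus Gaussian noise.
        x += eps * drift + math.sqrt(2.0 * eps) * rng.gauss(0.0, 1.0)
    return x
```

Averaged over many chains, guided samples land farther from the mode (larger |x|) than unguided ones, mirroring the paper's observation that uniform sampling concentrates in high-density regions unless the process is explicitly steered.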

Cite

Text

Sehwag et al. "Generating High Fidelity Data from Low-Density Regions Using Diffusion Models." Conference on Computer Vision and Pattern Recognition, 2022. doi:10.1109/CVPR52688.2022.01120

Markdown

[Sehwag et al. "Generating High Fidelity Data from Low-Density Regions Using Diffusion Models." Conference on Computer Vision and Pattern Recognition, 2022.](https://mlanthology.org/cvpr/2022/sehwag2022cvpr-generating/) doi:10.1109/CVPR52688.2022.01120

BibTeX

@inproceedings{sehwag2022cvpr-generating,
  title     = {{Generating High Fidelity Data from Low-Density Regions Using Diffusion Models}},
  author    = {Sehwag, Vikash and Hazirbas, Caner and Gordo, Albert and Ozgenel, Firat and Canton, Cristian},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2022},
  pages     = {11492--11501},
  doi       = {10.1109/CVPR52688.2022.01120},
  url       = {https://mlanthology.org/cvpr/2022/sehwag2022cvpr-generating/}
}