Learning to Schedule Resistant to Adversarial Attacks in Diffusion Probabilistic Models Under the Threat of Lipschitz Singularities

Abstract

The field of generative models has advanced significantly with the introduction of Diffusion Probabilistic Models (DPMs). However, the discovery of Lipschitz singularities in DPMs, where the Lipschitz constant of the noise-prediction network becomes unbounded at timesteps close to zero, exposes a vulnerability to subtle adversarial attacks. This paper introduces a novel approach to enhance the robustness of DPMs against such attacks, specifically addressing the challenge posed by Lipschitz singularities. By learning a dynamic schedule for the noise level σ through Reinforcement Learning (RL), we mitigate the adverse effects of adversarial attacks that exploit these singularities. Experimental results demonstrate the effectiveness of our approach in maintaining high-quality image generation.
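To make the scheduling idea concrete, below is a minimal, hypothetical PyTorch sketch of RL-based σ scheduling: a REINFORCE-style policy proposes a monotone σ schedule, and a toy reward penalizes schedules whose smallest σ approaches zero, the regime where the Lipschitz singularity arises. All names (`SigmaPolicy`, `reward_fn`), the timestep count, and the reward itself are illustrative assumptions, not the paper's implementation; in practice the reward would score actual samples generated with the candidate schedule.

```python
import torch
import torch.nn.functional as F

T = 50  # number of diffusion timesteps (illustrative choice)

class SigmaPolicy(torch.nn.Module):
    """Gaussian policy over positive sigma increments; their cumulative
    sum yields a strictly increasing schedule sigma_1 < ... < sigma_T."""
    def __init__(self, n_steps: int):
        super().__init__()
        self.mean = torch.nn.Parameter(torch.zeros(n_steps))
        self.log_std = torch.nn.Parameter(torch.zeros(n_steps))

    def sample(self):
        dist = torch.distributions.Normal(self.mean, self.log_std.exp())
        x = dist.sample()                  # no gradient through the sample
        log_prob = dist.log_prob(x).sum()  # score term for REINFORCE
        increments = F.softplus(x) + 1e-4  # strictly positive increments
        sigmas = torch.cumsum(increments, dim=0)
        return sigmas, log_prob

def reward_fn(sigmas: torch.Tensor) -> torch.Tensor:
    # Placeholder reward (assumption): penalize schedules whose smallest
    # sigma approaches zero, where the network's Lipschitz constant blows
    # up. A real reward would evaluate samples drawn with this schedule.
    return -1.0 / sigmas[0]

policy = SigmaPolicy(T)
opt = torch.optim.Adam(policy.parameters(), lr=1e-2)

for step in range(200):
    sigmas, log_prob = policy.sample()
    loss = -reward_fn(sigmas) * log_prob  # REINFORCE gradient estimator
    opt.zero_grad()
    loss.backward()
    opt.step()

print("learned sigma_min:", float(F.softplus(policy.mean)[0] + 1e-4))
```

Parameterizing the schedule through positive increments keeps it monotone by construction, so the policy can only adjust how quickly σ grows and how far σ_min stays from the singular region at t → 0.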

Cite

Text

SangHwa Hong. "Learning to Schedule Resistant to Adversarial Attacks in Diffusion Probabilistic Models Under the Threat of Lipschitz Singularities." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2024, pp. 2957-2966. doi:10.1109/CVPRW63382.2024.00301

Markdown

[SangHwa Hong. "Learning to Schedule Resistant to Adversarial Attacks in Diffusion Probabilistic Models Under the Threat of Lipschitz Singularities." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2024.](https://mlanthology.org/cvprw/2024/hong2024cvprw-learning/) doi:10.1109/CVPRW63382.2024.00301

BibTeX

@inproceedings{hong2024cvprw-learning,
  title     = {{Learning to Schedule Resistant to Adversarial Attacks in Diffusion Probabilistic Models Under the Threat of Lipschitz Singularities}},
  author    = {Hong, SangHwa},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
  year      = {2024},
  pages     = {2957--2966},
  doi       = {10.1109/CVPRW63382.2024.00301},
  url       = {https://mlanthology.org/cvprw/2024/hong2024cvprw-learning/}
}