Tendiffpure: Tensorizing Diffusion Models for Purification
Abstract
Diffusion models are effective purification methods that purify noisy or adversarially perturbed examples before they are fed into classifiers. A major limitation of existing diffusion models for purification is their low efficiency. Current solutions rely on knowledge distillation, which in fact degrades the generation quality, i.e., the purification performance, because of the small number of generation steps. We propose Tendiffpure, a compressed diffusion model for purification via tensorization. Unlike knowledge distillation methods, we keep the number of generation steps unchanged and directly compress U-Nets, the backbones of diffusion models, using tensor-train decomposition, which reduces the number of parameters and captures more spatial information in multi-dimensional data such as images. The space complexity is reduced from $\mathit{O}(N^2)$ to $\mathit{O}(NR^2)$ with $R \leq 4$. Experimental results show that Tendiffpure generates high-quality purified results more efficiently and outperforms the baseline purification methods on the CIFAR-10, FashionMNIST and MNIST datasets under two noise types and one adversarial attack.
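The abstract's compression claim follows the standard tensor-train (TT) pattern: an $N \times N$ weight matrix is reshaped into a higher-order tensor and factored into small cores of rank at most $R$, so the parameter count drops from $O(N^2)$ to $O(NR^2)$. The sketch below is a minimal, generic TT-SVD factorization of a dense layer in NumPy; the function name, mode sizes, and rank cap are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def tt_decompose(W, dims_in, dims_out, max_rank=4):
    """Factor an (N x N) weight matrix into tensor-train cores.

    Generic TT-SVD sketch (not the paper's code): reshape W into a
    2d-way tensor, interleave input/output modes, then peel off one
    4-way core per mode pair via truncated SVDs with rank <= max_rank.
    """
    d = len(dims_in)
    T = W.reshape(*(list(dims_in) + list(dims_out)))
    # Interleave modes to (i1, o1, i2, o2, ..., id, od).
    perm = [k for pair in zip(range(d), range(d, 2 * d)) for k in pair]
    T = T.transpose(perm)
    cores, rank = [], 1
    for k in range(d - 1):
        T = T.reshape(rank * dims_in[k] * dims_out[k], -1)
        U, S, Vt = np.linalg.svd(T, full_matrices=False)
        new_rank = min(max_rank, len(S))  # cap the TT rank at R
        cores.append(U[:, :new_rank].reshape(rank, dims_in[k], dims_out[k], new_rank))
        T = S[:new_rank, None] * Vt[:new_rank]
        rank = new_rank
    cores.append(T.reshape(rank, dims_in[-1], dims_out[-1], 1))
    return cores

# Example: a 64x64 dense layer with each side factored as 4x4x4.
W = np.random.randn(64, 64)
cores = tt_decompose(W, [4, 4, 4], [4, 4, 4], max_rank=4)
full_params = W.size                       # 4096 parameters
tt_params = sum(c.size for c in cores)     # far fewer with R <= 4
print(full_params, tt_params)
```

With $R \leq 4$ the cores store only a few hundred parameters instead of 4096, illustrating the $O(N^2) \to O(NR^2)$ reduction; in the paper this idea is applied to the U-Net backbone rather than a single dense layer.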
Cite
Text
Zhou et al. "Tendiffpure: Tensorizing Diffusion Models for Purification." ICML 2023 Workshops: Frontiers4LCD, 2023.
Markdown
[Zhou et al. "Tendiffpure: Tensorizing Diffusion Models for Purification." ICML 2023 Workshops: Frontiers4LCD, 2023.](https://mlanthology.org/icmlw/2023/zhou2023icmlw-tendiffpure/)
BibTeX
@inproceedings{zhou2023icmlw-tendiffpure,
title = {{Tendiffpure: Tensorizing Diffusion Models for Purification}},
author = {Zhou, Derun and Bai, Mingyuan and Zhao, Qibin},
booktitle = {ICML 2023 Workshops: Frontiers4LCD},
year = {2023},
url = {https://mlanthology.org/icmlw/2023/zhou2023icmlw-tendiffpure/}
}