DataElixir: Purifying Poisoned Dataset to Mitigate Backdoor Attacks via Diffusion Models

Cite

Text

Zhou et al. "DataElixir: Purifying Poisoned Dataset to Mitigate Backdoor Attacks via Diffusion Models." AAAI Conference on Artificial Intelligence, 2024. doi:10.1609/AAAI.V38I19.30186

Markdown

[Zhou et al. "DataElixir: Purifying Poisoned Dataset to Mitigate Backdoor Attacks via Diffusion Models." AAAI Conference on Artificial Intelligence, 2024.](https://mlanthology.org/aaai/2024/zhou2024aaai-dataelixir/) doi:10.1609/AAAI.V38I19.30186

BibTeX

@inproceedings{zhou2024aaai-dataelixir,
  title     = {{DataElixir: Purifying Poisoned Dataset to Mitigate Backdoor Attacks via Diffusion Models}},
  author    = {Zhou, Jiachen and Lv, Peizhuo and Lan, Yibing and Meng, Guozhu and Chen, Kai and Ma, Hualong},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2024},
  pages     = {21850--21858},
  doi       = {10.1609/AAAI.V38I19.30186},
  url       = {https://mlanthology.org/aaai/2024/zhou2024aaai-dataelixir/}
}