Backdoor Token Unlearning: Exposing and Defending Backdoors in Pretrained Language Models

Cite

Text

Jiang et al. "Backdoor Token Unlearning: Exposing and Defending Backdoors in Pretrained Language Models." AAAI Conference on Artificial Intelligence, 2025. doi:10.1609/AAAI.V39I23.34605

Markdown

[Jiang et al. "Backdoor Token Unlearning: Exposing and Defending Backdoors in Pretrained Language Models." AAAI Conference on Artificial Intelligence, 2025.](https://mlanthology.org/aaai/2025/jiang2025aaai-backdoor/) doi:10.1609/AAAI.V39I23.34605

BibTeX

@inproceedings{jiang2025aaai-backdoor,
  title     = {{Backdoor Token Unlearning: Exposing and Defending Backdoors in Pretrained Language Models}},
  author    = {Jiang, Peihai and Lyu, Xixiang and Li, Yige and Ma, Jing},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2025},
  pages     = {24285--24293},
  doi       = {10.1609/AAAI.V39I23.34605},
  url       = {https://mlanthology.org/aaai/2025/jiang2025aaai-backdoor/}
}