Is BERT Really Robust? A Strong Baseline for Natural Language Attack on Text Classification and Entailment
Abstract
Machine learning algorithms are often vulnerable to adversarial examples that have imperceptible alterations from the original counterparts but can fool the state-of-the-art models. It is helpful to evaluate or even improve the robustness of these models by exposing the maliciously crafted adversarial examples. In this paper, we present TextFooler, a simple but strong baseline to generate adversarial text. By applying it to two fundamental natural language tasks, text classification and textual entailment, we successfully attacked three target models, including the powerful pre-trained BERT, and the widely used convolutional and recurrent neural networks. We demonstrate three advantages of this framework: (1) effective—it outperforms previous attacks by success rate and perturbation rate, (2) utility-preserving—it preserves semantic content, grammaticality, and correct types classified by humans, and (3) efficient—it generates adversarial text with computational complexity linear to the text length.
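For readers who want a concrete picture of the method, TextFooler is a black-box, greedy word-substitution attack: rank the words by how much they matter to the target model's prediction, then swap the most important ones for semantically similar candidates until the predicted label flips. The Python sketch below is only an illustrative reconstruction of that idea, not the authors' released implementation: `predict_stub`, the tiny `SYNONYMS` table, and the deletion-based importance score are hypothetical stand-ins for the real target model, the counter-fitted synonym embeddings, and the semantic-similarity and part-of-speech checks described in the paper. Because each word is scored once and tried against a bounded candidate set, the number of model queries grows linearly with the text length, which is the efficiency claim made in the abstract.

```python
# Illustrative sketch of a TextFooler-style greedy word-substitution attack.
# All components here are toy stand-ins, not the paper's actual code.
from typing import Callable, Dict, List, Tuple


def predict_stub(tokens: List[str]) -> Tuple[int, float]:
    """Toy black-box classifier: label 1 if 'good' outweighs 'bad', else 0.

    Returns (predicted_label, confidence). A stand-in for the BERT/CNN/LSTM
    targets, which the attack only queries and never inspects internally.
    """
    score = sum(t == "good" for t in tokens) - sum(t == "bad" for t in tokens)
    label = 1 if score > 0 else 0
    confidence = min(1.0, 0.5 + 0.25 * abs(score))
    return label, confidence


# Tiny stand-in for the synonym candidates the paper draws from
# counter-fitted word embeddings.
SYNONYMS: Dict[str, List[str]] = {
    "good": ["fine", "decent"],
    "movie": ["film"],
}


def word_importance(tokens: List[str],
                    predict: Callable[[List[str]], Tuple[int, float]]) -> List[int]:
    """Rank word positions by how much deleting the word hurts the model's
    confidence in its original prediction (larger drop = more important)."""
    orig_label, orig_conf = predict(tokens)
    drops = []
    for i in range(len(tokens)):
        reduced = tokens[:i] + tokens[i + 1:]
        label, conf = predict(reduced)
        drop = orig_conf if label != orig_label else orig_conf - conf
        drops.append((drop, i))
    return [i for _, i in sorted(drops, reverse=True)]


def attack(text: str,
           predict: Callable[[List[str]], Tuple[int, float]] = predict_stub) -> str:
    """Greedily substitute synonyms for the most important words until the
    predicted label flips, perturbing as few words as possible."""
    tokens = text.split()
    orig_label, _ = predict(tokens)
    for i in word_importance(tokens, predict):
        best_conf = predict(tokens)[1]
        best_tokens = None
        for candidate in SYNONYMS.get(tokens[i], []):
            trial = tokens[:i] + [candidate] + tokens[i + 1:]
            label, conf = predict(trial)
            if label != orig_label:
                return " ".join(trial)  # label flipped: adversarial example found
            if conf < best_conf:        # otherwise keep the swap that most
                best_conf, best_tokens = conf, trial  # erodes the model's confidence
        if best_tokens is not None:
            tokens = best_tokens
    return " ".join(tokens)  # no flip found; return the (possibly perturbed) text


if __name__ == "__main__":
    # Flips the toy classifier's label by rewriting two words.
    print(attack("a good movie with a good cast"))
```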
Cite
Text
Jin et al. "Is BERT Really Robust? A Strong Baseline for Natural Language Attack on Text Classification and Entailment." AAAI Conference on Artificial Intelligence, 2020. doi:10.1609/AAAI.V34I05.6311
Markdown
[Jin et al. "Is BERT Really Robust? A Strong Baseline for Natural Language Attack on Text Classification and Entailment." AAAI Conference on Artificial Intelligence, 2020.](https://mlanthology.org/aaai/2020/jin2020aaai-bert/) doi:10.1609/AAAI.V34I05.6311
BibTeX
@inproceedings{jin2020aaai-bert,
title = {{Is BERT Really Robust? A Strong Baseline for Natural Language Attack on Text Classification and Entailment}},
author = {Jin, Di and Jin, Zhijing and Zhou, Joey Tianyi and Szolovits, Peter},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2020},
pages = {8018-8025},
doi = {10.1609/AAAI.V34I05.6311},
url = {https://mlanthology.org/aaai/2020/jin2020aaai-bert/}
}