Summarization Attack via Paraphrasing (Student Abstract)
Abstract
Many natural language processing models are known to be fragile under adversarial attacks. Recent work on adversarial attacks has demonstrated high success rates against sentiment analysis and classification models. However, attacks on summarization models have not been well studied. Summarization tasks are rarely affected by word substitution, since advanced abstractive summarization models utilize sentence-level information. In this paper, we propose a paraphrasing-based method to attack summarization models. We first rank the sentences in the document according to their impact on the summary. Then, we apply a paraphrasing procedure to generate adversarial samples. Finally, we test our algorithm on benchmark datasets against other methods. Our approach achieved the highest success rate and the lowest sentence substitution rate. In addition, the adversarial samples retain high semantic similarity with the original sentences.
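Since the abstract only outlines the pipeline, the following is a minimal, hypothetical Python sketch of the two steps it describes: ranking sentences by their impact on the summary, then substituting paraphrases of the highest-impact sentences. The `summarize`, `paraphrase`, and `similarity` functions here are placeholder stubs, and the thresholds are illustrative assumptions, not the authors' models or settings.

```python
# Hypothetical sketch of a paraphrasing-based attack on a summarizer.
# All stubs below are placeholders, not the paper's actual components.
from difflib import SequenceMatcher


def summarize(document: list[str]) -> str:
    """Stub summarizer: returns the first sentence (stand-in for an abstractive model)."""
    return document[0] if document else ""


def paraphrase(sentence: str) -> str:
    """Stub paraphraser: trivially reorders comma-separated clauses
    (stand-in for a learned paraphrase generator)."""
    parts = sentence.split(", ")
    return ", ".join(reversed(parts))


def similarity(a: str, b: str) -> float:
    """Cheap string-overlap proxy for semantic similarity between summaries."""
    return SequenceMatcher(None, a, b).ratio()


def rank_sentences(document: list[str]) -> list[int]:
    """Rank sentence indices by impact: the more the summary changes when a
    sentence is removed, the higher that sentence's impact."""
    base = summarize(document)
    impact = []
    for i in range(len(document)):
        reduced = document[:i] + document[i + 1:]
        impact.append((1.0 - similarity(base, summarize(reduced)), i))
    return [i for _, i in sorted(impact, reverse=True)]


def attack(document: list[str], max_substitutions: int = 3,
           drop_threshold: float = 0.5) -> list[str]:
    """Paraphrase the highest-impact sentences until the summary degrades
    past an (assumed) threshold or the substitution budget is spent."""
    adversarial = list(document)
    base = summarize(document)
    for idx in rank_sentences(document)[:max_substitutions]:
        adversarial[idx] = paraphrase(adversarial[idx])
        if similarity(base, summarize(adversarial)) < drop_threshold:
            break  # summary has drifted enough from the original
    return adversarial


if __name__ == "__main__":
    doc = ["The central bank raised rates, citing inflation.",
           "Markets fell sharply.",
           "Analysts expect more hikes."]
    print(attack(doc))
```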
Cite
Text
Li and Liu. "Summarization Attack via Paraphrasing (Student Abstract)." AAAI Conference on Artificial Intelligence, 2023. doi:10.1609/AAAI.V37I13.26985
Markdown
[Li and Liu. "Summarization Attack via Paraphrasing (Student Abstract)." AAAI Conference on Artificial Intelligence, 2023.](https://mlanthology.org/aaai/2023/li2023aaai-summarization/) doi:10.1609/AAAI.V37I13.26985
BibTeX
@inproceedings{li2023aaai-summarization,
title = {{Summarization Attack via Paraphrasing (Student Abstract)}},
author = {Li, Jiyao and Liu, Wei},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2023},
pages = {16250-16251},
doi = {10.1609/AAAI.V37I13.26985},
url = {https://mlanthology.org/aaai/2023/li2023aaai-summarization/}
}