Weighted-Sampling Audio Adversarial Example Attack
Abstract
Recent studies have highlighted audio adversarial examples as a ubiquitous threat to state-of-the-art automatic speech recognition systems. Thorough studies on how to effectively generate adversarial examples are essential to prevent potential attacks. Despite much research on this topic, the efficiency and robustness of existing works are not yet satisfactory. In this paper, we propose weighted-sampling audio adversarial examples, focusing on the number and the weights of distortion points to reinforce the attack. Further, we apply a denoising method in the loss function to make the adversarial attack more imperceptible. Experiments show that our method is the first in the field to generate audio adversarial examples with low noise and high audio robustness at minute-level time consumption.
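The abstract outlines two ingredients: perturbing only a weighted sample of waveform positions, and adding a denoising term to the loss so the perturbation stays imperceptible. Below is a minimal, hypothetical PyTorch sketch of that general recipe; the stub asr_logits model, the random sampling weights, the mask threshold, and the squared-energy stand-in for the denoising term are all illustrative assumptions, not the authors' implementation, whose exact formulation should be taken from the full paper.

```python
# Hypothetical sketch of the recipe described in the abstract: perturb a
# weighted sample of waveform positions and penalize perturbation energy.
# NOT the authors' code; the model stub and all weights are assumptions.
import torch

torch.manual_seed(0)

T, C = 100, 28                        # frames, character classes (stub sizes)
x = torch.randn(16000)                # 1 s of 16 kHz audio (dummy waveform)
target = torch.randint(1, C, (12,))   # desired transcription ids (dummy)

# Weighted sampling: a per-position weight decides where distortion may go.
weights = torch.rand(16000)                      # assumed sampling weights
mask = (weights > 0.5).float()                   # perturb ~half the samples
delta = torch.zeros(16000, requires_grad=True)   # adversarial perturbation

W = torch.randn(160, C)               # stand-in acoustic model parameters

def asr_logits(wave):
    """Stub for a differentiable ASR model: frame the wave, project to logits."""
    return wave.reshape(T, 160) @ W   # (T, C) per-frame logits

ctc = torch.nn.CTCLoss(blank=0)
opt = torch.optim.Adam([delta], lr=1e-2)

for step in range(100):
    adv = x + mask * delta                        # distort sampled positions only
    log_probs = asr_logits(adv).log_softmax(-1)   # (T, C)
    attack = ctc(log_probs.unsqueeze(1),          # CTCLoss wants (T, N, C)
                 target.unsqueeze(0),
                 torch.tensor([T]),
                 torch.tensor([target.numel()]))
    # Imperceptibility term standing in for the paper's denoising method
    # (an assumption here): simply the energy of the applied perturbation.
    denoise = (mask * delta).pow(2).mean()
    loss = attack + 10.0 * denoise
    opt.zero_grad()
    loss.backward()
    opt.step()
```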
Cite
Text
Liu et al. "Weighted-Sampling Audio Adversarial Example Attack." AAAI Conference on Artificial Intelligence, 2020. doi:10.1609/AAAI.V34I04.5928
Markdown
[Liu et al. "Weighted-Sampling Audio Adversarial Example Attack." AAAI Conference on Artificial Intelligence, 2020.](https://mlanthology.org/aaai/2020/liu2020aaai-weighted/) doi:10.1609/AAAI.V34I04.5928
BibTeX
@inproceedings{liu2020aaai-weighted,
title = {{Weighted-Sampling Audio Adversarial Example Attack}},
author = {Liu, Xiaolei and Wan, Kun and Ding, Yufei and Zhang, Xiaosong and Zhu, Qingxin},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2020},
pages = {4908--4915},
doi = {10.1609/AAAI.V34I04.5928},
url = {https://mlanthology.org/aaai/2020/liu2020aaai-weighted/}
}