Adversarial Robustness of Open-Set Recognition: Face Recognition and Person Re-Identification
Abstract
Recent studies show that DNNs are vulnerable to adversarial attacks, in which carefully chosen imperceptible modifications to the inputs lead to incorrect predictions. However, most existing attacks focus on closed-set classification, and adversarial attacks on open-set recognition have been less investigated. In this paper, we systematically investigate the adversarial robustness of widely used open-set recognition models, namely person re-identification (ReID) and face recognition (FR) models. Specifically, we compare two categories of black-box attacks: transfer-based extensions of standard closed-set attacks and several direct random-search based attacks proposed here. Extensive experiments demonstrate that ReID and FR models are also vulnerable to adversarial attacks, highlighting a potential AI trustworthiness problem for these socially important applications.
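The paper's concrete attack algorithms are not reproduced on this page. As a rough illustration of the general idea behind a direct random-search black-box attack on an open-set (embedding-based) model, here is a minimal sketch: a toy random linear map stands in for a real ReID/FR embedding network, and random perturbations inside an L-infinity ball are kept whenever they push the embedding away from the clean identity. All names, parameters, and the toy model are hypothetical, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a ReID/FR embedding network: a fixed random linear map
# from a flattened 8x8 "image" to a 16-d feature (hypothetical, for illustration).
W = rng.standard_normal((16, 64))

def embed(x):
    """Return a unit-norm feature vector for input x."""
    f = W @ x
    return f / np.linalg.norm(f)

def random_search_attack(x, eps=0.1, steps=200):
    """Untargeted black-box attack: propose random perturbations within an
    L-inf ball of radius eps, keeping any proposal that lowers the cosine
    similarity between the perturbed and clean embeddings."""
    f0 = embed(x)
    delta = np.zeros_like(x)
    best_sim = 1.0  # cosine similarity to the clean embedding (lower = stronger attack)
    for _ in range(steps):
        cand = np.clip(delta + 0.5 * rng.uniform(-eps, eps, x.shape), -eps, eps)
        sim = float(embed(x + cand) @ f0)
        if sim < best_sim:
            best_sim, delta = sim, cand
    return x + delta, best_sim

x = rng.standard_normal(64)
x_adv, sim = random_search_attack(x)
print(f"cosine similarity after attack: {sim:.3f}")
```

In an open-set setting there is no fixed class label to flip; the attack instead degrades the feature similarity that matching and retrieval rely on, which is why the objective above is embedding distance rather than classification loss.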
Cite
Text
Gong et al. "Adversarial Robustness of Open-Set Recognition: Face Recognition and Person Re-Identification." European Conference on Computer Vision Workshops, 2020. doi:10.1007/978-3-030-66415-2_9
Markdown
[Gong et al. "Adversarial Robustness of Open-Set Recognition: Face Recognition and Person Re-Identification." European Conference on Computer Vision Workshops, 2020.](https://mlanthology.org/eccvw/2020/gong2020eccvw-adversarial/) doi:10.1007/978-3-030-66415-2_9
BibTeX
@inproceedings{gong2020eccvw-adversarial,
title = {{Adversarial Robustness of Open-Set Recognition: Face Recognition and Person Re-Identification}},
author = {Gong, Xiao and Hu, Guosheng and Hospedales, Timothy M. and Yang, Yongxin},
booktitle = {European Conference on Computer Vision Workshops},
year = {2020},
pages = {135--151},
doi = {10.1007/978-3-030-66415-2_9},
url = {https://mlanthology.org/eccvw/2020/gong2020eccvw-adversarial/}
}