A Manifold View of Adversarial Risk
Abstract
The adversarial risk of a machine learning model has been widely studied. Most previous works assume that the data lies in the whole ambient space. We take a new angle by bringing the manifold assumption into consideration. Assuming the data lies on a manifold, we investigate two new types of adversarial risk: the normal adversarial risk, due to perturbations along the normal direction, and the in-manifold adversarial risk, due to perturbations within the manifold. We prove that the classic adversarial risk can be bounded from both sides using the normal and in-manifold adversarial risks. We also exhibit a surprisingly pessimistic case in which the standard adversarial risk is nonzero even though both the normal and in-manifold risks are zero. We conclude the paper with empirical studies supporting our theoretical results. Our results suggest the possibility of improving the robustness of a classifier by focusing only on the normal adversarial risk.
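For concreteness, here is a minimal sketch of how the three risks discussed above are commonly formalized; the notation ($f$ for the classifier, $\mathcal{D}$ for the data distribution supported on the manifold $\mathcal{M}$, $\epsilon$ for the perturbation budget, $N_x\mathcal{M}$ for the normal space at $x$) is assumed for illustration, and the paper's exact definitions may differ.

% Standard adversarial risk: worst-case perturbation anywhere in the ambient epsilon-ball.
\[
\mathrm{Risk}_{\mathrm{adv}}(f) \;=\; \mathbb{E}_{(x,y)\sim\mathcal{D}}\Big[\sup_{\|x'-x\|\le\epsilon} \mathbf{1}\{f(x')\neq y\}\Big]
\]
% Normal adversarial risk: perturbations restricted to the normal space of M at x.
\[
\mathrm{Risk}_{\mathrm{nor}}(f) \;=\; \mathbb{E}_{(x,y)\sim\mathcal{D}}\Big[\sup_{x'-x\,\in\,N_x\mathcal{M},\;\|x'-x\|\le\epsilon} \mathbf{1}\{f(x')\neq y\}\Big]
\]
% In-manifold adversarial risk: perturbations restricted to stay on the manifold M.
\[
\mathrm{Risk}_{\mathrm{in}}(f) \;=\; \mathbb{E}_{(x,y)\sim\mathcal{D}}\Big[\sup_{x'\in\mathcal{M},\;\|x'-x\|\le\epsilon} \mathbf{1}\{f(x')\neq y\}\Big]
\]

Under these sketched definitions, each restricted supremum ranges over a subset of the ambient ball, so both the normal and in-manifold risks lower-bound the standard adversarial risk, which is consistent with the lower-bound side of the two-sided bound claimed in the abstract.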
Cite
Text
Zhang et al. "A Manifold View of Adversarial Risk." Artificial Intelligence and Statistics, 2022.

Markdown

[Zhang et al. "A Manifold View of Adversarial Risk." Artificial Intelligence and Statistics, 2022.](https://mlanthology.org/aistats/2022/zhang2022aistats-manifold/)

BibTeX
@inproceedings{zhang2022aistats-manifold,
title = {{A Manifold View of Adversarial Risk}},
author = {Zhang, Wenjia and Zhang, Yikai and Hu, Xiaoling and Goswami, Mayank and Chen, Chao and Metaxas, Dimitris N.},
booktitle = {Artificial Intelligence and Statistics},
year = {2022},
pages = {11598--11614},
volume = {151},
url = {https://mlanthology.org/aistats/2022/zhang2022aistats-manifold/}
}