PROVEN: Verifying Robustness of Neural Networks with a Probabilistic Approach
Abstract
We propose a novel framework PROVEN to PRObabilistically VErify Neural network's robustness with statistical guarantees. PROVEN provides probability certificates of neural network robustness when the input perturbations follow a distributional characterization. Notably, PROVEN is derived from current state-of-the-art worst-case neural network robustness verification frameworks, and therefore it can provide probability certificates with little computational overhead on top of existing methods such as Fast-Lin, CROWN, and CNN-Cert. Experiments on small and large MNIST and CIFAR neural network models demonstrate that our probabilistic approach can tighten the robustness certificate to around $1.8\times$ and $3.5\times$, with at least $99.99\%$ confidence, compared with the worst-case robustness certificates of CROWN and CNN-Cert.
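To make the idea concrete, here is a minimal, hypothetical Python sketch (not the authors' implementation) of how a probability certificate can be layered on a worst-case linear bound. It assumes a CROWN-style linear lower bound $f(x) \geq a^\top x + b$ that holds on the $\ell_\infty$ ball of radius $\epsilon$ around an input $x_0$, together with i.i.d. uniform perturbations on each coordinate; Hoeffding's inequality then bounds the chance that the linear surrogate dips below zero. The function name `proven_style_certificate` and all numbers are illustrative.

```python
import numpy as np

def proven_style_certificate(a, b, x0, eps):
    """Hypothetical sketch: probability certificate from a linear lower bound.

    Assumes a verifier (e.g., CROWN) supplied  f(x) >= a @ x + b  for all x in
    the l_inf ball of radius eps around x0, and that each perturbation
    coordinate delta_i is independent and uniform on [-eps, eps]. Hoeffding's
    inequality then gives
        P(a @ delta <= -margin) <= exp(-margin^2 / (2 * eps^2 * ||a||_2^2)),
    so the margin f(x0 + delta) stays positive with at least the returned
    probability.
    """
    margin = float(a @ x0 + b)   # mean of the linear surrogate (E[delta] = 0)
    if margin <= 0:
        return 0.0               # this bound yields no guarantee here
    denom = 2.0 * eps**2 * float(np.sum(a**2))
    return 1.0 - np.exp(-margin**2 / denom)

# Illustrative 3-pixel example (all values made up).
a = np.array([0.5, -0.2, 0.3])
x0 = np.array([0.6, 0.4, 0.7])
print(proven_style_certificate(a, b=0.1, x0=x0, eps=0.1))  # ~1.0
```

The full PROVEN framework handles more general bounded perturbation distributions and uses both the upper and lower linear bounds from the underlying verifier; this sketch keeps only the concentration step to show how a probabilistic certificate follows from a worst-case one with little extra computation.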
Cite
Text
Weng et al. "PROVEN: Verifying Robustness of Neural Networks with a Probabilistic Approach." International Conference on Machine Learning, 2019.
Markdown
[Weng et al. "PROVEN: Verifying Robustness of Neural Networks with a Probabilistic Approach." International Conference on Machine Learning, 2019.](https://mlanthology.org/icml/2019/weng2019icml-proven/)
BibTeX
@inproceedings{weng2019icml-proven,
title = {{PROVEN: Verifying Robustness of Neural Networks with a Probabilistic Approach}},
author = {Weng, Lily and Chen, Pin-Yu and Nguyen, Lam and Squillante, Mark and Boopathy, Akhilan and Oseledets, Ivan and Daniel, Luca},
booktitle = {International Conference on Machine Learning},
year = {2019},
pages = {6727--6736},
volume = {97},
url = {https://mlanthology.org/icml/2019/weng2019icml-proven/}
}