Efficient Statistical Assessment of Neural Network Corruption Robustness
Abstract
We quantify the robustness of a trained network to input uncertainties with a stochastic simulation inspired by the field of Statistical Reliability Engineering. The robustness assessment is cast as a statistical hypothesis test: the network is deemed locally robust if the estimated probability of failure is lower than a critical level. The procedure is based on an Importance Splitting simulation generating samples of rare events. We derive theoretical guarantees that are non-asymptotic with respect to the sample size. Experiments tackling large-scale networks demonstrate the efficiency of our method, which makes only a small number of calls to the network function.
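To make the Importance Splitting idea concrete, here is a minimal, generic sketch of adaptive multilevel splitting for estimating a rare-event probability P(score(X) >= threshold). This is an illustrative implementation, not the authors' exact procedure: the function names (`importance_splitting`, `gauss_mcmc`) and the choice of a 1-D Gaussian toy problem are assumptions made for the example; in the paper's setting, `score` would be a failure indicator derived from the network output under input corruption.

```python
import numpy as np

def importance_splitting(score, sample, mcmc_step, threshold,
                         n=1000, keep_frac=0.5, rng=None):
    """Adaptive multilevel splitting estimate of P(score(X) >= threshold).

    Generic sketch: `score`, `sample`, and `mcmc_step` are user-supplied
    callables standing in for the network-based score, the corruption
    distribution, and a conditional MCMC move, respectively.
    """
    rng = np.random.default_rng(rng)
    xs = np.array([sample(rng) for _ in range(n)])
    prob = 1.0
    level = -np.inf
    while level < threshold:
        scores = np.array([score(x) for x in xs])
        # Next intermediate level: the (1 - keep_frac) quantile of scores.
        level = min(np.quantile(scores, 1 - keep_frac), threshold)
        survivors = xs[scores >= level]
        if len(survivors) == 0:
            return 0.0
        # The final estimate is the product of conditional survival fractions.
        prob *= len(survivors) / len(xs)
        if level >= threshold:
            break
        # Clone survivors and move them with MCMC steps that leave the
        # conditional law given {score >= level} invariant.
        idx = rng.integers(0, len(survivors), size=n)
        xs = np.array([mcmc_step(survivors[i], level, rng) for i in idx])
    return prob

def gauss_mcmc(x, level, rng, steps=10, sigma=0.5):
    """Metropolis kernel for a N(0,1) target restricted to {y >= level}."""
    for _ in range(steps):
        y = x + sigma * rng.normal()
        if y >= level and rng.random() < np.exp((x * x - y * y) / 2):
            x = y
    return x

# Toy check: P(N(0,1) > 3) ~ 1.35e-3, far cheaper to estimate by
# splitting than by naive Monte Carlo.
p_hat = importance_splitting(score=lambda x: x,
                             sample=lambda rng: rng.normal(),
                             mcmc_step=gauss_mcmc,
                             threshold=3.0, n=2000, rng=0)
```

Each stage keeps roughly a `keep_frac` fraction of samples, so a probability of order `keep_frac**k` is reached in about `k` stages, which is the source of the efficiency gain over crude Monte Carlo for rare failure events.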
Cite
Text
Tit et al. "Efficient Statistical Assessment of Neural Network Corruption Robustness." Neural Information Processing Systems, 2021.
Markdown
[Tit et al. "Efficient Statistical Assessment of Neural Network Corruption Robustness." Neural Information Processing Systems, 2021.](https://mlanthology.org/neurips/2021/tit2021neurips-efficient/)
BibTeX
@inproceedings{tit2021neurips-efficient,
  title = {{Efficient Statistical Assessment of Neural Network Corruption Robustness}},
  author = {Tit, Karim and Furon, Teddy and Rousset, Mathias},
  booktitle = {Neural Information Processing Systems},
  year = {2021},
  url = {https://mlanthology.org/neurips/2021/tit2021neurips-efficient/}
}