Enumerating Safe Regions in Deep Neural Networks with Provable Probabilistic Guarantees
Abstract
Identifying safe areas is essential for establishing trust in systems based on Deep Neural Networks (DNNs). To this end, we introduce the AllDNN-Verification problem: given a safety property and a DNN, enumerate the set of all the regions of the property's input domain that are safe, i.e., where the property does hold. Due to the #P-hardness of the problem, we propose an efficient approximation method called ε-ProVe. Our approach exploits a controllable underestimation of the output reachable sets obtained via statistical prediction of tolerance limits, and can provide a tight lower estimate of the safe areas with provable probabilistic guarantees. Our empirical evaluation on different standard benchmarks shows the scalability and effectiveness of our method, offering valuable insights for this new type of verification of DNNs.
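The statistical tolerance limits the abstract refers to follow a classical order-statistic argument: if we draw n i.i.d. samples of a network's output, the sample maximum upper-bounds at least a fraction R of the output distribution with confidence 1 − Rⁿ. A minimal sketch of the resulting sample-size computation (an illustration of the general tolerance-limit bound, not the authors' ε-ProVe implementation; the function name is hypothetical):

```python
import math

def min_samples(R: float, beta: float) -> int:
    """Smallest n such that the maximum of n i.i.d. output samples
    covers at least a fraction R of the output distribution with
    confidence 1 - beta, i.e. the smallest n with R**n <= beta."""
    return math.ceil(math.log(beta) / math.log(R))

# Example: to cover 99% of outputs with 99% confidence,
# roughly 459 samples of the network's output suffice.
print(min_samples(0.99, 0.01))  # 459
```

Repeating this bound per output dimension yields an underestimated reachable set whose tightness is controlled by R and β.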
Cite
Text
Marzari et al. "Enumerating Safe Regions in Deep Neural Networks with Provable Probabilistic Guarantees." AAAI Conference on Artificial Intelligence, 2024. doi:10.1609/AAAI.V38I19.30134
Markdown
[Marzari et al. "Enumerating Safe Regions in Deep Neural Networks with Provable Probabilistic Guarantees." AAAI Conference on Artificial Intelligence, 2024.](https://mlanthology.org/aaai/2024/marzari2024aaai-enumerating/) doi:10.1609/AAAI.V38I19.30134
BibTeX
@inproceedings{marzari2024aaai-enumerating,
title = {{Enumerating Safe Regions in Deep Neural Networks with Provable Probabilistic Guarantees}},
author = {Marzari, Luca and Corsi, Davide and Marchesini, Enrico and Farinelli, Alessandro and Cicalese, Ferdinando},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2024},
pages = {21387-21394},
doi = {10.1609/AAAI.V38I19.30134},
url = {https://mlanthology.org/aaai/2024/marzari2024aaai-enumerating/}
}