Probabilistically Tightened Linear Relaxation-Based Perturbation Analysis for Neural Network Verification

Abstract

We present Probabilistically Tightened Linear Relaxation-based Perturbation Analysis (PT-LiRPA), a novel framework that combines over-approximation techniques from LiRPA-based approaches with a sampling-based method to compute tight intermediate reachable sets. In detail, we show that, with negligible computational overhead, PT-LiRPA, by exploiting the estimated reachable sets, significantly tightens the lower and upper linear bounds of a neural network's output, reducing the computational cost of formal verification tools while providing probabilistic guarantees on verification soundness. Extensive experiments on standard formal verification benchmarks, including the International Verification of Neural Networks Competition, show that our PT-LiRPA-based verifier improves robustness certificates, i.e., the certified lower bound of the ε perturbation tolerated by the models, by up to 3.31X and 2.26X compared to related work. Importantly, our probabilistic approach provides a valuable solution for challenging competition entries where state-of-the-art formal verification methods fail, allowing us to provide answers with high confidence (i.e., at least 99%).
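The sampling step described above can be illustrated with a minimal sketch: estimate per-neuron bounds of an intermediate layer by propagating random samples from the ε-ball around an input. The network weights, ε value, and sample count below are made-up placeholders, not the paper's actual setup; in PT-LiRPA such estimated reachable sets would then be used to tighten the linear relaxations of the intermediate neurons.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny first layer with arbitrary weights (illustrative only).
W1 = rng.standard_normal((4, 3))
b1 = rng.standard_normal(4)

def estimated_reachable_set(x, eps, n_samples=10_000):
    """Sample points in the L-infinity ball of radius eps around x and
    return empirical lower/upper bounds of the layer's pre-activations."""
    X = x + rng.uniform(-eps, eps, size=(n_samples, x.size))
    Z = X @ W1.T + b1                    # pre-activations for every sample
    return Z.min(axis=0), Z.max(axis=0)  # estimated intermediate bounds

x0 = np.zeros(3)
lo, hi = estimated_reachable_set(x0, eps=0.1)
```

Because the bounds are empirical, they under-approximate the true reachable set; this is why the paper's guarantees are probabilistic rather than strictly sound, with confidence controlled by the number of samples.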

Cite

Text

Marzari et al. "Probabilistically Tightened Linear Relaxation-Based Perturbation Analysis for Neural Network Verification." Journal of Artificial Intelligence Research, 2025. doi:10.1613/JAIR.1.20808

Markdown

[Marzari et al. "Probabilistically Tightened Linear Relaxation-Based Perturbation Analysis for Neural Network Verification." Journal of Artificial Intelligence Research, 2025.](https://mlanthology.org/jair/2025/marzari2025jair-probabilistically/) doi:10.1613/JAIR.1.20808

BibTeX

@article{marzari2025jair-probabilistically,
  title     = {{Probabilistically Tightened Linear Relaxation-Based Perturbation Analysis for Neural Network Verification}},
  author    = {Marzari, Luca and Cicalese, Ferdinando and Farinelli, Alessandro},
  journal   = {Journal of Artificial Intelligence Research},
  year      = {2025},
  doi       = {10.1613/JAIR.1.20808},
  volume    = {84},
  url       = {https://mlanthology.org/jair/2025/marzari2025jair-probabilistically/}
}