Tight Neural Network Verification via Semidefinite Relaxations and Linear Reformulations

Abstract

We present a novel semidefinite programming (SDP) relaxation that enables tight and efficient verification of neural networks. The tightness is achieved by combining SDP relaxations with valid linear cuts constructed via the reformulation-linearisation technique (RLT). The computational efficiency results from a layerwise SDP formulation and an iterative algorithm that incrementally adds RLT-generated linear cuts to the verification formulation. The layerwise RLT-SDP relaxation presented here is shown to produce the tightest SDP relaxation for ReLU neural networks available in the literature. We report experimental results on MNIST neural networks showing that the method outperforms state-of-the-art methods while maintaining acceptable computational overhead. For networks with approximately 10k nodes (1k nodes, respectively), the proposed method improved the percentage of certified robustness cases from 0% to 82% (from 35% to 70%, respectively).
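
For context, the sketch below illustrates the kind of constraints such a relaxation combines for a single ReLU layer y = max(0, Wx + b) with input bounds l <= x <= u. It uses standard notation for lifted SDP relaxations with RLT (McCormick-style) cuts and is an illustrative assumption, not the paper's exact layerwise RLT-SDP formulation.

% Illustrative sketch (assumed standard notation); not the authors' exact formulation.
% ReLU layer y = max(0, Wx + b), input bounds l <= x <= u, lifted variables X_xx = xx^T, X_xy = xy^T, Y = yy^T.
\begin{align*}
  & y \ge 0, \qquad y \ge Wx + b
      && \text{(linear ReLU envelope)} \\
  & \operatorname{diag}(Y) = \operatorname{diag}(W X_{xy}) + b \circ y
      && \text{(lifted complementarity } y \circ (y - Wx - b) = 0\text{)} \\
  & P = \begin{bmatrix} 1 & x^{\top} & y^{\top} \\ x & X_{xx} & X_{xy} \\ y & X_{xy}^{\top} & Y \end{bmatrix} \succeq 0
      && \text{(SDP lifting of } [1; x; y][1; x; y]^{\top}\text{)} \\
  & x u^{\top} + l x^{\top} - l u^{\top} - X_{xx} \ge 0
      && \text{(RLT cuts linearising } (x - l)(u - x)^{\top} \ge 0\text{)}
\end{align*}

The last family of elementwise inequalities (together with analogous products of the bound constraints) are the RLT-generated linear cuts; the paper's method adds such cuts iteratively to a layerwise SDP of the form above.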

Cite

Text

Lan et al. "Tight Neural Network Verification via Semidefinite Relaxations and Linear Reformulations." AAAI Conference on Artificial Intelligence, 2022. doi:10.1609/AAAI.V36I7.20689

Markdown

[Lan et al. "Tight Neural Network Verification via Semidefinite Relaxations and Linear Reformulations." AAAI Conference on Artificial Intelligence, 2022.](https://mlanthology.org/aaai/2022/lan2022aaai-tight/) doi:10.1609/AAAI.V36I7.20689

BibTeX

@inproceedings{lan2022aaai-tight,
  title     = {{Tight Neural Network Verification via Semidefinite Relaxations and Linear Reformulations}},
  author    = {Lan, Jianglin and Zheng, Yang and Lomuscio, Alessio},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2022},
  pages     = {7272--7280},
  doi       = {10.1609/AAAI.V36I7.20689},
  url       = {https://mlanthology.org/aaai/2022/lan2022aaai-tight/}
}