Efficient Neural Network Verification via Layer-Based Semidefinite Relaxations and Linear Cuts

Abstract

We introduce an efficient and tight layer-based semidefinite relaxation for verifying the local robustness of neural networks. The improved tightness results from combining semidefinite relaxations with linear cuts. We obtain a computationally efficient method by decomposing the semidefinite formulation into layerwise constraints. By leveraging chordal graph decompositions, we show that the presented formulation is provably tighter than current approaches. Experiments on a set of benchmark networks show that the proposed approach enables the verification of more instances than other relaxation methods. The results also demonstrate that the proposed SDP relaxation is an order of magnitude faster than previous SDP methods.
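
Example

The following is a minimal, illustrative sketch of the general recipe the abstract describes: a per-layer semidefinite (moment) relaxation of a ReLU layer, tightened with linear (triangle) cuts derived from pre-activation bounds. It is not the authors' exact LayerSDP formulation; the function name, its arguments, and the use of CVXPY are assumptions, and for clarity every neuron is assumed unstable (lz < 0 < uz).

import numpy as np
import cvxpy as cp

def layer_sdp_with_cuts(W, b, lx, ux, lz, uz):
    """Per-layer SDP relaxation of y = ReLU(W x + b), plus triangle cuts.

    W, b   : layer weight matrix and bias (numpy, shapes (m, n) and (m,)).
    lx, ux : elementwise bounds on the layer input x.
    lz, uz : precomputed pre-activation bounds with lz < 0 < uz
             (e.g. from interval arithmetic); an assumption of this sketch.
    Returns the input/output expressions and the constraint list.
    """
    m, n = W.shape
    d = 1 + n + m
    # Moment matrix approximating [1; x; y][1; x; y]^T.
    M = cp.Variable((d, d), symmetric=True)
    x = M[0, 1:1 + n]            # first-order terms for the input
    y = M[0, 1 + n:]             # first-order terms for the output
    X = M[1:1 + n, 1:1 + n]      # X ~ x x^T
    P = M[1:1 + n, 1 + n:]       # P ~ x y^T
    Y = M[1 + n:, 1 + n:]        # Y ~ y y^T

    z = W @ x + b                # pre-activation
    constraints = [
        M >> 0,                  # layerwise PSD block, not one network-wide matrix
        M[0, 0] == 1,
        y >= 0,                  # ReLU envelope: y >= 0
        y >= z,                  #                y >= z
        # Complementarity y * (y - z) = 0, written in the lifted variables:
        cp.diag(Y) == cp.diag(W @ P) + cp.multiply(b, y),
        # Input box (x - lx)(ux - x) >= 0, lifted to the second moments:
        cp.diag(X) <= cp.multiply(lx + ux, x) - cp.multiply(lx, ux),
        # Linear (triangle) cuts that tighten the SDP relaxation:
        y <= cp.multiply(uz / (uz - lz), z - lz),
    ]
    return x, y, constraints

A verifier built on this kind of relaxation would chain one such moment matrix per layer through the shared input/output variables, then maximize a linear function of the final layer's output over all constraints and compare the optimal value against the robustness threshold; keeping the PSD constraints small and per-layer, rather than one large matrix over the whole network, is what makes the approach computationally efficient.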

Cite

Text

Batten et al. "Efficient Neural Network Verification via Layer-Based Semidefinite Relaxations and Linear Cuts." International Joint Conference on Artificial Intelligence, 2021. doi:10.24963/IJCAI.2021/301

Markdown

[Batten et al. "Efficient Neural Network Verification via Layer-Based Semidefinite Relaxations and Linear Cuts." International Joint Conference on Artificial Intelligence, 2021.](https://mlanthology.org/ijcai/2021/batten2021ijcai-efficient/) doi:10.24963/IJCAI.2021/301

BibTeX

@inproceedings{batten2021ijcai-efficient,
  title     = {{Efficient Neural Network Verification via Layer-Based Semidefinite Relaxations and Linear Cuts}},
  author    = {Batten, Ben and Kouvaros, Panagiotis and Lomuscio, Alessio and Zheng, Yang},
  booktitle = {International Joint Conference on Artificial Intelligence},
  year      = {2021},
  pages     = {2184--2190},
  doi       = {10.24963/IJCAI.2021/301},
  url       = {https://mlanthology.org/ijcai/2021/batten2021ijcai-efficient/}
}