DEEPSPLIT: An Efficient Splitting Method for Neural Network Verification via Indirect Effect Analysis
Abstract
We propose a novel, complete algorithm for the verification and analysis of feed-forward, ReLU-based neural networks. The algorithm, based on symbolic interval propagation, introduces a new method for selecting split nodes that evaluates the indirect effect splitting has on the relaxations of successor nodes. We combine this with a new, efficient linear-programming encoding of the splitting constraints to further improve the algorithm's performance. The resulting implementation, DeepSplit, achieved speedups of 1–2 orders of magnitude and 21–34% fewer timeouts compared to the current state-of-the-art toolkits.
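The split-node selection idea in the abstract can be illustrated with a small sketch. The Python code below is not the paper's scoring function; it is a minimal, assumed heuristic in the same spirit: given pre-activation interval bounds for two adjacent layers, each unstable ReLU is scored by a rough estimate of how much splitting it at zero would tighten the triangle relaxations of its successors, and the highest-scoring node is chosen. The helper names (relax_error, indirect_effect_scores) and the tightening estimate itself are illustrative assumptions, not the authors' method.

import numpy as np


def relax_error(lb: float, ub: float) -> float:
    """Maximum vertical gap of the triangle (LP) relaxation of a ReLU
    with pre-activation bounds [lb, ub]; zero if the node is stable."""
    if lb >= 0.0 or ub <= 0.0:
        return 0.0
    return (-lb) * ub / (ub - lb)


def indirect_effect_scores(lbs, ubs, W_next, next_lbs, next_ubs):
    """Score each unstable ReLU in a layer by an estimate of how much the
    *successor* layer's relaxations tighten if that node is split at zero.

    Assumption: splitting node i at 0 makes its own relaxation exact in both
    branches, so successor j's pre-activation interval is taken to shrink by
    roughly |W_next[j, i]| * relax_error(lbs[i], ubs[i]); the score sums the
    resulting reduction in successor relaxation error.
    """
    scores = np.zeros(len(lbs))
    for i in range(len(lbs)):
        err_i = relax_error(lbs[i], ubs[i])
        if err_i == 0.0:
            continue  # stable nodes are never split
        for j in range(W_next.shape[0]):
            shrink = abs(W_next[j, i]) * err_i
            old = relax_error(next_lbs[j], next_ubs[j])
            # Crude assumption: the shrink tightens both ends of j's interval.
            new = relax_error(next_lbs[j] + shrink, next_ubs[j] - shrink)
            scores[i] += max(old - new, 0.0)
    return scores


if __name__ == "__main__":
    # Toy two-layer slice: pick the unstable node with the largest score.
    lbs, ubs = np.array([-1.0, -0.2, -3.0]), np.array([2.0, 0.5, 1.0])
    W_next = np.array([[1.0, -0.5, 0.3], [0.2, 0.0, -1.0]])
    next_lbs, next_ubs = np.array([-2.0, -1.5]), np.array([1.5, 2.5])
    scores = indirect_effect_scores(lbs, ubs, W_next, next_lbs, next_ubs)
    print("split node:", int(np.argmax(scores)), "scores:", scores)

In a branch-and-bound loop, a heuristic of this kind would be re-evaluated on the current subproblem's bounds before each split; the actual DeepSplit scoring and its LP encoding of the splitting constraints are described in the paper itself.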
Cite
Text
Henriksen and Lomuscio. "DEEPSPLIT: An Efficient Splitting Method for Neural Network Verification via Indirect Effect Analysis." International Joint Conference on Artificial Intelligence, 2021. doi:10.24963/IJCAI.2021/351
Markdown
[Henriksen and Lomuscio. "DEEPSPLIT: An Efficient Splitting Method for Neural Network Verification via Indirect Effect Analysis." International Joint Conference on Artificial Intelligence, 2021.](https://mlanthology.org/ijcai/2021/henriksen2021ijcai-deepsplit/) doi:10.24963/IJCAI.2021/351
BibTeX
@inproceedings{henriksen2021ijcai-deepsplit,
title = {{DEEPSPLIT: An Efficient Splitting Method for Neural Network Verification via Indirect Effect Analysis}},
author = {Henriksen, Patrick and Lomuscio, Alessio},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {2021},
pages = {2549--2555},
doi = {10.24963/IJCAI.2021/351},
url = {https://mlanthology.org/ijcai/2021/henriksen2021ijcai-deepsplit/}
}