Efficient Global Robustness Certification of Neural Networks via Interleaving Twin-Network Encoding (Extended Abstract)
Abstract
The robustness of deep neural networks, which measures how sensitive the model output is to input perturbations, has recently received significant interest, especially for safety-critical systems. While most previous works have focused on the local robustness property, studies of the global robustness property, i.e., robustness over the entire input space, are still lacking. In this work, we formulate the global robustness certification problem for ReLU neural networks and present an efficient approach to address it. Our approach includes a novel interleaving twin-network encoding scheme and an over-approximation algorithm leveraging relaxation and refinement techniques. Its timing efficiency and effectiveness are evaluated against other state-of-the-art global robustness certification methods and demonstrated via case studies on practical applications.
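To make the global robustness notion concrete, the following is a minimal illustrative sketch (not the paper's method): it empirically samples the maximum output change of a tiny ReLU network under bounded input perturbations across the input space. Such sampling only yields a lower bound on the true global output variation; sound certification requires a formal method such as the interleaving twin-network encoding described above. All weights, sizes, and parameter values here are arbitrary assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny 2-layer ReLU network with arbitrary weights (illustrative only).
W1 = rng.normal(size=(4, 2)); b1 = rng.normal(size=4)
W2 = rng.normal(size=(1, 4)); b2 = rng.normal(size=1)

def f(x):
    """Forward pass: affine -> ReLU -> affine."""
    return W2 @ np.maximum(W1 @ x + b1, 0.0) + b2

def empirical_global_variation(eps=0.1, n_points=200, n_pert=50):
    """Sample max |f(x + d) - f(x)| over x in [-1, 1]^2 and ||d||_inf <= eps.

    NOTE: this is only an empirical LOWER bound on the global output
    variation; a certified UPPER bound needs a sound analysis (e.g. the
    paper's interleaving twin-network encoding, which we do not implement).
    """
    worst = 0.0
    for _ in range(n_points):
        x = rng.uniform(-1.0, 1.0, size=2)
        for _ in range(n_pert):
            d = rng.uniform(-eps, eps, size=2)
            worst = max(worst, abs((f(x + d) - f(x)).item()))
    return worst

print(empirical_global_variation())
```

A certification method would instead prove that this worst-case output variation never exceeds a given threshold for any admissible input and perturbation, which is what the twin-network encoding makes tractable.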
Cite
Text
Wang et al. "Efficient Global Robustness Certification of Neural Networks via Interleaving Twin-Network Encoding (Extended Abstract)." International Joint Conference on Artificial Intelligence, 2023. doi:10.24963/IJCAI.2023/727
Markdown
[Wang et al. "Efficient Global Robustness Certification of Neural Networks via Interleaving Twin-Network Encoding (Extended Abstract)." International Joint Conference on Artificial Intelligence, 2023.](https://mlanthology.org/ijcai/2023/wang2023ijcai-efficient/) doi:10.24963/IJCAI.2023/727
BibTeX
@inproceedings{wang2023ijcai-efficient,
title = {{Efficient Global Robustness Certification of Neural Networks via Interleaving Twin-Network Encoding (Extended Abstract)}},
author = {Wang, Zhilu and Huang, Chao and Zhu, Qi},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {2023},
pages = {6498--6503},
doi = {10.24963/IJCAI.2023/727},
url = {https://mlanthology.org/ijcai/2023/wang2023ijcai-efficient/}
}