Neuro-Symbolic Verification of Deep Neural Networks

Abstract

Formal verification has emerged as a powerful approach to ensure the safety and reliability of deep neural networks. However, current verification tools are limited to only a handful of properties that can be expressed as first-order constraints over the inputs and outputs of a network. While adversarial robustness and fairness fall under this category, many real-world properties (e.g., "an autonomous vehicle has to stop in front of a stop sign") remain outside the scope of existing verification technology. To mitigate this severe practical restriction, we introduce a novel framework for verifying neural networks, named neuro-symbolic verification. The key idea is to use neural networks as part of the otherwise logical specification, enabling the verification of a wide variety of complex, real-world properties, including the one above. A defining feature of our framework is that it can be implemented on top of existing verification infrastructure for neural networks, making it easily accessible to researchers and practitioners.
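To illustrate the key idea of using a network inside the specification, here is a minimal, purely hypothetical sketch (all names and functions are illustrative stand-ins, not from the paper): a perception network serves as part of the specification for a controller network, encoding the property "if a stop sign is detected, the controller must command zero speed". The sketch only falsifies by sampling; the paper's contribution is verifying such properties formally with existing neural-network verifiers.

```python
import random

def perception_net(x):
    # Illustrative stand-in for a trained stop-sign detector
    # (in the framework, this network is part of the specification).
    return x[0] > 0.8

def controller_net(x):
    # Illustrative stand-in for the network under verification:
    # returns a speed command in km/h.
    return 0.0 if x[0] > 0.7 else 30.0

def neuro_symbolic_property(x):
    # First-order-style constraint with a network embedded in it:
    #   perception_net(x) = stop_sign  ->  controller_net(x) = 0
    return (not perception_net(x)) or controller_net(x) == 0.0

# Sampling-based falsification (a formal verifier would instead prove
# the implication for all inputs in a given region).
random.seed(0)
samples = [[random.random(), random.random()] for _ in range(1000)]
violations = [x for x in samples if not neuro_symbolic_property(x)]
print(len(violations))
```

In this toy setup the controller brakes on a superset of the detector's stop-sign region, so no violations are found; a real verifier would certify (or refute) this for all inputs, not just samples.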

Cite

Text

Xie et al. "Neuro-Symbolic Verification of Deep Neural Networks." International Joint Conference on Artificial Intelligence, 2022. doi:10.24963/IJCAI.2022/503

Markdown

[Xie et al. "Neuro-Symbolic Verification of Deep Neural Networks." International Joint Conference on Artificial Intelligence, 2022.](https://mlanthology.org/ijcai/2022/xie2022ijcai-neuro/) doi:10.24963/IJCAI.2022/503

BibTeX

@inproceedings{xie2022ijcai-neuro,
  title     = {{Neuro-Symbolic Verification of Deep Neural Networks}},
  author    = {Xie, Xuan and Kersting, Kristian and Neider, Daniel},
  booktitle = {International Joint Conference on Artificial Intelligence},
  year      = {2022},
  pages     = {3622--3628},
  doi       = {10.24963/IJCAI.2022/503},
  url       = {https://mlanthology.org/ijcai/2022/xie2022ijcai-neuro/}
}