Beyond the Single Neuron Convex Barrier for Neural Network Certification

Abstract

We propose a new parametric framework, called k-ReLU, for computing precise and scalable convex relaxations used to certify neural networks. The key idea is to approximate the output of multiple ReLUs in a layer jointly instead of separately. This joint relaxation captures dependencies between the inputs to different ReLUs in a layer and thus overcomes the convex barrier imposed by the single-neuron triangle relaxation and its approximations. The framework is parametric in the number k of ReLUs it considers jointly and can be combined with existing verifiers to improve their precision. Our experimental results show that k-ReLU enables significantly more precise certification than existing state-of-the-art verifiers while maintaining scalability.
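For context, the single-neuron triangle relaxation that the abstract identifies as the convex barrier can be sketched as follows. This is the standard formulation from the verification literature, not an excerpt from the paper: for a neuron $y = \mathrm{ReLU}(x)$ with pre-activation bounds $l \le x \le u$ and $l < 0 < u$, the tightest convex relaxation treating the neuron in isolation is

```latex
\begin{align}
y &\ge 0, \\
y &\ge x, \\
y &\le \frac{u\,(x - l)}{u - l}.
\end{align}
```

Because these constraints involve only one neuron at a time, any dependency between the pre-activations of different neurons in the same layer is lost; k-ReLU instead relaxes groups of $k$ such neurons jointly to recover (some of) those dependencies.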

Cite

Text

Singh et al. "Beyond the Single Neuron Convex Barrier for Neural Network Certification." Neural Information Processing Systems, 2019.

Markdown

[Singh et al. "Beyond the Single Neuron Convex Barrier for Neural Network Certification." Neural Information Processing Systems, 2019.](https://mlanthology.org/neurips/2019/singh2019neurips-beyond/)

BibTeX

@inproceedings{singh2019neurips-beyond,
  title     = {{Beyond the Single Neuron Convex Barrier for Neural Network Certification}},
  author    = {Singh, Gagandeep and Ganvir, Rupanshu and Püschel, Markus and Vechev, Martin},
  booktitle = {Neural Information Processing Systems},
  year      = {2019},
  pages     = {15098--15109},
  url       = {https://mlanthology.org/neurips/2019/singh2019neurips-beyond/}
}