OVERT: An Algorithm for Safety Verification of Neural Network Control Policies for Nonlinear Systems
Abstract
Deep learning methods can be used to produce control policies, but certifying their safety is challenging. The resulting networks are nonlinear and often very large. In response to this challenge, we present OVERT: a sound algorithm for safety verification of nonlinear discrete-time closed loop dynamical systems with neural network control policies. The novelty of OVERT lies in combining ideas from the classical formal methods literature with ideas from the newer neural network verification literature. The central concept of OVERT is to abstract nonlinear functions with a set of optimally tight piecewise linear bounds. Such piecewise linear bounds are designed for seamless integration into ReLU neural network verification tools. OVERT can be used to prove bounded-time safety properties by either computing reachable sets or solving feasibility queries directly. We demonstrate safety verification on several classical benchmark problems. OVERT compares favorably to existing methods both in computation time and in tightness of the reachable set.
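The central idea, bounding a nonlinear function between piecewise linear envelopes, can be illustrated with a minimal sketch. This is not OVERT's algorithm (which chooses breakpoints optimally and composes bounds through the dynamics); it simply builds a linear interpolant of a 1D function over uniform breakpoints, then shifts it up and down by the worst-case interpolation error on a dense check grid so the envelopes soundly bracket the function on that grid:

```python
import numpy as np

def piecewise_linear_bounds(f, lo, hi, n_segments=4, n_check=1000):
    """Sketch: piecewise-linear upper/lower envelopes of f on [lo, hi].

    Uses uniform breakpoints (a simplification; OVERT optimizes tightness)
    and validates soundness only on a finite check grid.
    """
    xs = np.linspace(lo, hi, n_segments + 1)
    grid = np.linspace(lo, hi, n_check)
    interp = np.interp(grid, xs, f(xs))   # piecewise-linear interpolant
    err = f(grid) - interp                # signed interpolation error
    upper = interp + err.max()            # shifted up: over-approximation
    lower = interp + err.min()            # shifted down: under-approximation
    return grid, lower, upper

# Example: bound sin(x) on [0, pi]
grid, lower, upper = piecewise_linear_bounds(np.sin, 0.0, np.pi)
assert np.all(lower <= np.sin(grid) + 1e-9)
assert np.all(np.sin(grid) <= upper + 1e-9)
```

Because the envelopes are piecewise linear, they can be encoded exactly with ReLU constraints, which is what makes this style of abstraction compatible with ReLU network verification tools.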
Cite
Text
Sidrane et al. "OVERT: An Algorithm for Safety Verification of Neural Network Control Policies for Nonlinear Systems." Journal of Machine Learning Research, 2022.
Markdown
[Sidrane et al. "OVERT: An Algorithm for Safety Verification of Neural Network Control Policies for Nonlinear Systems." Journal of Machine Learning Research, 2022.](https://mlanthology.org/jmlr/2022/sidrane2022jmlr-overt/)
BibTeX
@article{sidrane2022jmlr-overt,
  title = {{OVERT: An Algorithm for Safety Verification of Neural Network Control Policies for Nonlinear Systems}},
  author = {Sidrane, Chelsea and Maleki, Amir and Irfan, Ahmed and Kochenderfer, Mykel J.},
  journal = {Journal of Machine Learning Research},
  year = {2022},
  pages = {1--45},
  volume = {23},
  url = {https://mlanthology.org/jmlr/2022/sidrane2022jmlr-overt/}
}