On the Verification of Neural ODEs with Stochastic Guarantees

Abstract

We show that Neural ODEs, an emerging class of time-continuous neural networks, can be verified by solving a set of global-optimization problems. For this purpose, we introduce Stochastic Lagrangian Reachability (SLR), an abstraction-based technique for constructing a tight Reachtube (an over-approximation of the set of reachable states over a given time horizon), and provide stochastic guarantees in the form of confidence intervals for the Reachtube bounds. SLR inherently avoids the infamous wrapping effect (the accumulation of over-approximation errors) by performing local optimization steps to expand safe regions, instead of repeatedly forward-propagating them as deterministic reachability methods do. To enable fast local optimizations, we introduce a novel forward-mode adjoint sensitivity method that computes gradients without the need for backpropagation. Finally, we establish asymptotic and non-asymptotic convergence rates for SLR.
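The paper's forward-mode adjoint sensitivity method is not reproduced here; as a point of reference, the sketch below illustrates the closely related classical forward (variational) sensitivity technique for a toy tanh Neural ODE, which likewise yields gradients from a single forward integration, with no backpropagation through the solver. The vector field, the RK4 integrator, and the terminal objective are all illustrative assumptions, not the authors' setup.

```python
import numpy as np

# Toy Neural ODE vector field: f(x) = W2 @ tanh(W1 @ x + b1)
# (illustrative stand-in; the paper's benchmarks use trained Neural ODEs)
rng = np.random.default_rng(0)
d, h = 2, 8
W1 = 0.5 * rng.normal(size=(h, d))
b1 = np.zeros(h)
W2 = 0.5 * rng.normal(size=(d, h))

def f(x):
    return W2 @ np.tanh(W1 @ x + b1)

def jac_f(x):
    # Jacobian of f at x: W2 @ diag(1 - tanh(pre)^2) @ W1
    pre = W1 @ x + b1
    return W2 @ ((1.0 - np.tanh(pre) ** 2)[:, None] * W1)

def aug_dynamics(x, S):
    # Forward (variational) sensitivity equations:
    #   dx/dt = f(x),   dS/dt = J_f(x) @ S,   S(0) = I
    return f(x), jac_f(x) @ S

def rk4_step(x, S, dt):
    k1x, k1S = aug_dynamics(x, S)
    k2x, k2S = aug_dynamics(x + 0.5 * dt * k1x, S + 0.5 * dt * k1S)
    k3x, k3S = aug_dynamics(x + 0.5 * dt * k2x, S + 0.5 * dt * k2S)
    k4x, k4S = aug_dynamics(x + dt * k3x, S + dt * k3S)
    return (x + dt / 6.0 * (k1x + 2 * k2x + 2 * k3x + k4x),
            S + dt / 6.0 * (k1S + 2 * k2S + 2 * k3S + k4S))

def flow_and_sensitivity(x0, T=1.0, steps=100):
    # Integrate state and sensitivity together, forward in time only.
    x, S = x0.copy(), np.eye(len(x0))
    dt = T / steps
    for _ in range(steps):
        x, S = rk4_step(x, S, dt)
    return x, S  # x = phi_T(x0),  S = d phi_T(x0) / d x0

x0 = np.array([1.0, -0.5])
xT, S = flow_and_sensitivity(x0)
# Gradient of the terminal distance g(x0) = ||phi_T(x0)|| w.r.t. x0,
# recovered from the sensitivity matrix without backpropagation:
grad = (xT / np.linalg.norm(xT)) @ S
print("phi_T(x0) =", xT, " grad g =", grad)
```

In SLR, gradients of this kind of terminal distance function drive the fast local optimization steps that expand the safe regions of the Reachtube, which is why computing them forward in time, alongside the flow itself, matters for performance.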

Cite

Text

Gruenbacher et al. "On the Verification of Neural ODEs with Stochastic Guarantees." AAAI Conference on Artificial Intelligence, 2021. doi:10.1609/AAAI.V35I13.17372

Markdown

[Gruenbacher et al. "On the Verification of Neural ODEs with Stochastic Guarantees." AAAI Conference on Artificial Intelligence, 2021.](https://mlanthology.org/aaai/2021/gruenbacher2021aaai-verification/) doi:10.1609/AAAI.V35I13.17372

BibTeX

@inproceedings{gruenbacher2021aaai-verification,
  title     = {{On the Verification of Neural ODEs with Stochastic Guarantees}},
  author    = {Gruenbacher, Sophie and Hasani, Ramin M. and Lechner, Mathias and Cyranka, Jacek and Smolka, Scott A. and Grosu, Radu},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2021},
  pages     = {11525--11535},
  doi       = {10.1609/AAAI.V35I13.17372},
  url       = {https://mlanthology.org/aaai/2021/gruenbacher2021aaai-verification/}
}